Quick Summary
A practical approach to using full-population issue data during audit planning to improve scoping, risk focus, and management engagement through standardized issue analytics.
Technique Breakdown
- **Type:** Population Analysis
- **Audit Phase:** Planning
- **Data Required:** Full issue register export from GRC system: issue ID, severity, source, status, dates, root cause, functional org, repeat-issue flag
- **Tools:** SQL, Excel, Python (optional — for thematic clustering)
- **Primary Benefit:** Replaces inconsistent manual issue review with a standardised analysis that directly informs scope, risk focus, and management conversations before fieldwork begins
- **Common Pitfalls:**
  - Filtering to open issues only — the full historical population is where patterns emerge
  - Treating root cause categories as ground truth without validating against issue descriptions
  - Conflating issue volume with risk severity — always weight by priority, not count alone
  - Using thematic clustering output without human validation of cluster labels
One of the most underutilized data assets in audit planning is the issue population itself. Not just open issues, but the full history of issues, their severity, root causes, and detection sources. When analyzed systematically, issue data does more than provide context. It reshapes where audit teams focus, how they scope, and how confidently they engage management from day one. By providing data-driven insights prior to finalizing the audit program, the audit team can be prepared to ask better questions of management and better focus the audit on areas of highest impact.
Before I go into the details of how we perform issue analytics, the benefits it has brought, and the pitfalls we have encountered, let me give some context on how issues have historically been considered in planning. During planning, the AIC (auditor-in-charge) would run a report of all open issues from our GRC system and export it to Excel. They would then manually filter down to the issues they believed covered the entity being audited. It was messy, manual, and inconsistent between teams and audits. Once the report had been filtered, the team would review the issues and management action plans and follow up with management on remediation progress. Clearly there are better insights to be had if we standardize issue analysis.
In order to perform useful issue analysis, we need to start with the right dataset. Instead of starting with open issues, we look at all issues. The data we query includes:
- functional org (which gets mapped to audit universe)
- issue ID
- issue description
- issue severity
- issue source
- issue status/phase
- issue owners
- create date
- due date
- management close date
- issue close date
- repeat issue flag
- amended due date
- root cause
- management action plan
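As a minimal sketch, the export can be loaded into a dataframe and filtered to an auditable entity via the functional org mapping. The column names, the tiny inline register, and the `org_to_entity` mapping below are all illustrative assumptions, not our actual schema:

```python
import pandas as pd

# Hypothetical GRC export (subset of the fields listed above).
register = pd.DataFrame({
    "issue_id": ["I-1", "I-2", "I-3"],
    "functional_org": ["Treasury", "Retail Ops", "Treasury"],
    "status": ["Closed", "Open", "Open"],
})

# Assumed mapping from functional org to the audit universe.
org_to_entity = {"Treasury": "Treasury Audit", "Retail Ops": "Branch Ops Audit"}
register["auditable_entity"] = register["functional_org"].map(org_to_entity)

# Full history (open AND closed) for the entity being planned.
scope = register[register["auditable_entity"] == "Treasury Audit"]
print(scope)
```

Note that the filter keeps closed issues deliberately; the historical population is where the patterns live.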
From this data we have a complete dataset from which to pull the issues applicable to any given audit by filtering on functional org. Providing this population as-is would already be better information than what is currently considered in audit planning, but there are more insights we can extract. First, we trend issues over time and also call out the percent of issues that are high priority, so we can answer “Are there more issues, and are they more severe?”
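The trend-and-severity cut can be sketched in pandas along these lines (the tiny inline dataframe stands in for a real GRC export, and the column names are assumptions):

```python
import pandas as pd

# Hypothetical issue register slice; column names assume your GRC schema.
issues = pd.DataFrame({
    "issue_id": ["I-1", "I-2", "I-3", "I-4", "I-5", "I-6"],
    "create_date": pd.to_datetime(
        ["2022-03-01", "2022-07-15", "2023-01-10",
         "2023-05-20", "2023-11-02", "2024-02-14"]),
    "severity": ["High", "Low", "High", "Medium", "High", "Low"],
})

# Trend: issues raised per year, plus the share that are high severity.
by_year = issues.groupby(issues["create_date"].dt.year).agg(
    total=("issue_id", "count"),
    high=("severity", lambda s: (s == "High").sum()),
)
by_year["pct_high"] = (100 * by_year["high"] / by_year["total"]).round(1)
print(by_year)
```

Reading total count and `pct_high` side by side is what lets the team answer both halves of the question at once: more issues, and more severe issues, are two different signals.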
We also look at the source of issues over time. As we have matured as an organization, emphasis has been placed on the lines of business to self-identify issues rather than wait for an audit or other second-line activity to find them. At its core, this analytic answers “How is risk being detected within this auditable entity, and is the control environment mature enough to self-identify issues rather than rely on audit or second-line discovery?”
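One way to sketch the source mix is a normalized crosstab of detection source by year, so each row shows the share of issues found by each source (the data and source labels below are assumptions for illustration):

```python
import pandas as pd

# Hypothetical export: detection source per issue, with create year.
issues = pd.DataFrame({
    "year":   [2022, 2022, 2022, 2023, 2023, 2023, 2023],
    "source": ["Audit", "Audit", "Self-Identified",
               "Self-Identified", "Self-Identified", "Second Line", "Audit"],
})

# Share of issues detected by each source, per year (rows sum to 1.0).
source_mix = pd.crosstab(issues["year"], issues["source"], normalize="index")
print(source_mix.round(2))
```

A rising self-identified share over time is the maturity signal; a mix still dominated by audit discovery suggests the first line is not yet finding its own issues.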
Using the date fields, we perform an aging analysis of open issues by issue priority so that we can answer “Is there a buildup of unresolved risk, especially high-severity issues?” A large backlog of aging high-severity issues should directly affect our audit scope.
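The aging cut can be sketched by computing days open against a planning cutoff and bucketing, then crossing the buckets against severity. The cutoff date, bucket edges, and inline data below are illustrative assumptions:

```python
import pandas as pd

AS_OF = pd.Timestamp("2024-06-30")  # assumed planning cutoff date

# Hypothetical open-issue slice of the register.
open_issues = pd.DataFrame({
    "issue_id": ["I-1", "I-2", "I-3", "I-4"],
    "severity": ["High", "High", "Medium", "Low"],
    "create_date": pd.to_datetime(
        ["2022-01-15", "2024-03-01", "2023-09-10", "2024-05-05"]),
})

# Age each open issue and assign it to an aging bucket.
open_issues["days_open"] = (AS_OF - open_issues["create_date"]).dt.days
open_issues["age_bucket"] = pd.cut(
    open_issues["days_open"],
    bins=[0, 90, 180, 365, float("inf")],
    labels=["<90d", "90-180d", "180-365d", ">365d"],
)

# Count of open issues by severity and age bucket.
aging = pd.crosstab(open_issues["severity"], open_issues["age_bucket"])
print(aging)
```

The cell to watch is high severity crossed with the oldest bucket; anything landing there is unresolved risk that planning should not ignore.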
Finally, we can extract insights from the root cause categories and issue descriptions to explain “What systemic control themes are driving issues (process, system, people, governance, etc.)?” and “Are there risk themes or recurring control failures not formally categorized?” These two complementary analytics answer some very insightful planning questions. The distribution of root cause domains tells us whether issues are process, system, or governance heavy. If issues are process heavy, we should have an operational walkthrough focus; if system heavy, we should take a look at IT controls; if governance heavy, we should put extra emphasis on oversight and monitoring activities. When we couple the root cause distribution with issue priority, we can better focus on the root causes of high risk issues. Using Python, we can also perform thematic clustering on the issue descriptions. The reason we look at issue descriptions when we already have categorized root causes is that the judgement applied in root cause assignment might not reflect deeper trends. For example, if the root causes for issues in a particular auditable entity show a concentration in the human error domain, but thematic clustering of the issue descriptions surfaces phrases like “manual upload” or “excel spreadsheet,” then the deeper root cause is likely process design rather than human error.
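As a lightweight stdlib stand-in for full thematic clustering (which in practice might use TF-IDF vectors and k-means, e.g. via scikit-learn), recurring adjacent word pairs across descriptions can surface exactly the kind of “manual upload” / “excel spreadsheet” theme described above. The sample descriptions and the `min_count` threshold are assumptions:

```python
import re
from collections import Counter

# Hypothetical descriptions for issues all coded as "human error".
descriptions = [
    "Analyst made manual upload error into the excel spreadsheet",
    "Incorrect manual upload of rates caused misstatement",
    "Excel spreadsheet formula overwritten during manual upload",
    "Reviewer missed exception in excel spreadsheet checklist",
]

def recurring_bigrams(texts, min_count=2):
    """Count adjacent word pairs across descriptions; pairs that repeat
    across issues hint at an uncategorized systemic theme."""
    counts = Counter()
    for text in texts:
        words = re.findall(r"[a-z]+", text.lower())
        counts.update(zip(words, words[1:]))
    return [(" ".join(pair), n) for pair, n in counts.most_common()
            if n >= min_count]

themes = recurring_bigrams(descriptions)
print(themes)  # "manual upload" and "excel spreadsheet" dominate
```

Whatever technique is used, the cluster labels still need human validation, as noted in the pitfalls above.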
Issue analytics is not a flashy technique, but it is one of the most practical ways to improve audit planning consistency. The data already exists in most GRC systems. The difference is whether it is reviewed manually and inconsistently, or analyzed systematically to inform scope, risk focus, and management engagement. In our experience, standardizing this one analytic has produced more visible value during planning than many of the more complex analytics performed later in fieldwork.
This article is Part 1 of the Planning Analytics series. For a full index of audit analytics techniques covered on this site, see the Practitioner Guide.