Have you thought lately about why so many audit departments assign an overall rating to audit reports? In this article, Chris Patrick, Vice President of Internal Audit at RoundPoint Mortgage Servicing Corporation, explains why his audit team stopped assigning overall ratings to audit reports, and how a collaborative approach to rating individual observations improved relationships with process owners.

Is the Use of Overall Audit Report Ratings Required?

Assigning an overall rating to an audit report is a time-honored tradition — but has its time finally run out? The practice of assigning an overall rating such as “Satisfactory,” “Needs Improvement,” or “Unsatisfactory” to an audit report is a holdover from audit departments of generations past, but it is not specifically required by the IIA’s International Standards for the Professional Practice of Internal Auditing (Standard 2410.A1). It’s time we took a closer look at what this common practice actually achieves.

Why Do So Many Audit Departments Assign an Overall Rating to Audit Reports?

Assigning overall ratings makes audit reports easily digestible for management, but in my view the practice was historically designed to get a seat at the table with alarmist “Unsatisfactory” ratings at a time when audit wasn’t seen as the value-add department that it is today. As internal audit is increasingly earning its leadership role in the organization through valuable consulting work, rating audit reports may be an outdated practice that actually detracts from the content of a report.

Why I Switched From Assigning an Overall Audit Rating to Rating Each Observation

In this article, I make a case for why my internal audit department at RoundPoint Mortgage switched from assigning an overall rating to rating each observation, and break down best practices for starting the transition in your own department. 

1. Overall Report Ratings Add Unnecessary Subjectivity

Internal audit’s mission is to provide objective assurance, but providing overall risk ratings introduces a level of subjectivity into the audit report. As an example, let’s say you audited 20 control areas of a company’s payroll function, and found one high-risk observation in which an employee got paid after they left the organization. How do you fairly represent the effectiveness of the entire function with a single rating?  

Granted, the one high-risk observation should never have happened. Do you rate the entire function as “Unsatisfactory” when even a single high-risk issue is identified, provoking the process owners? Do you rate the entire function “Satisfactory” and potentially obscure that high-risk issue? Do you split the difference with “Needs Improvement” for every audit with mixed results? Some of the important complexity of an audit report gets lost in any holistic rating.

Rating Individual Observations Helps to Maintain Objectivity

My team strives to remove that level of subjectivity by assigning a risk rating to each individual observation in the report. We always put the highest-risk issues at the front of the report because they warrant more attention, and work our way toward the lowest. We rely on the reader to review the contents and draw their own conclusions about the overall effectiveness of the area audited.

2. Overall Report Ratings Delay Audit Reports

Writing a clear, well-structured audit report can be the hardest part of any audit. It takes time and effort to lay out observations in a logical order and in a format that a layperson can easily understand. Yet, after taking the time and effort to create this report, it can get bogged down in a debate with process owners if they disagree with the overall rating you’ve assigned. If it’s a “Needs Improvement” or “Unsatisfactory” rating, that directly reflects on the people who perform those functions on a daily basis. The process owners are likely to take a defensive stance and argue that the rating obscures the many things that passed. The auditor must put their reputation on the line to argue that the entire function is unsatisfactory because of a single issue. This kind of standoff takes valuable time to resolve.

Rating Individual Observations Gets Reports Out More Quickly, Cleanly, and Collaboratively

Rating individual observations according to a framework enables us to get the report out faster because we have less pushback from process owners. When our department implemented individual observation ratings, we invested time early on to document rating definitions that align with COSO guidance. We evaluate the likelihood, impact, vulnerability, and velocity of each individual observation. Because we have clear definitions for these ratings, we can have a substantive dialogue with the process owner that tends to move quickly. We compare the issue to our documented framework, which lets us ask a process owner whether they agree that one issue is low likelihood because it happened once in 10,000 transactions, or that another issue has a medium impact because the company lost more than $10,000. With a documented framework for assigning each risk rating, the conversation with process owners is faster and garners more cooperation.
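To make the idea concrete, here is a minimal, hypothetical sketch (in Python) of what a documented observation-rating framework might look like. The threshold values, function names, and the rule of taking the higher of likelihood and impact are illustrative assumptions loosely echoing the examples above, not my department’s actual COSO-aligned definitions; a real framework would also cover vulnerability and velocity.

```python
# Hypothetical, simplified sketch of a documented observation-rating framework.
# Thresholds below are illustrative only, not an actual audit methodology.

from dataclasses import dataclass

def rate_likelihood(occurrences: int, population: int) -> str:
    """Rate likelihood from an observed exception rate (illustrative cut-offs)."""
    rate = occurrences / population
    if rate <= 1 / 10_000:      # e.g., one exception in 10,000 transactions
        return "Low"
    if rate <= 1 / 1_000:
        return "Medium"
    return "High"

def rate_impact(loss_usd: float) -> str:
    """Rate financial impact (illustrative dollar thresholds)."""
    if loss_usd < 10_000:
        return "Low"
    if loss_usd < 100_000:      # e.g., losses over $10,000 are at least Medium
        return "Medium"
    return "High"

@dataclass
class Observation:
    title: str
    occurrences: int
    population: int
    loss_usd: float

    def overall_rating(self) -> str:
        """Take the higher of likelihood and impact as this observation's rating."""
        order = {"Low": 0, "Medium": 1, "High": 2}
        likelihood = rate_likelihood(self.occurrences, self.population)
        impact = rate_impact(self.loss_usd)
        return max(likelihood, impact, key=order.get)

# Example: the payroll exception described earlier in the article.
obs = Observation("Terminated employee paid after separation", 1, 10_000, 12_500)
print(obs.overall_rating())     # -> "Medium" under these illustrative thresholds
```

The point of documenting definitions like these is that the conversation with a process owner shifts from arguing about a label to checking facts against agreed criteria.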

3. Overall Report Ratings Influence the Reader 

The introduction of a subjective overall rating detracts from the content of the report because it instantly influences the reader’s perception of the area audited. In some cases, I’ve seen the intended audience not read the report — why bother when the performance indicator is right there in front? This increases the likelihood that the intended audience will overlook significant audit findings that are not represented by the overall rating. Those who do dive into the details will likely have had their perception of the results skewed by the overall rating on the first page of the report — whether positively or negatively. Additionally, it is difficult to have a substantive conversation about a report with someone who has only read the rating on the front, impeding the aim of the report to communicate the audit findings.

Similarly, overall ratings can unintentionally persuade external parties. My organization operates in a highly regulated industry with examiners at the federal and state level. Our regulators frequently review our audit reports, and it would be a shame if they drew a conclusion about the effectiveness of the entire company based on the overall ratings assigned without actually looking at the content of the report.

Rating Individual Observations Encourages Active Comprehension

The audiences of audit reports are busy individuals, but that shouldn’t stop them from reading the report to understand its contents. I’d prefer they drew their own conclusions based on the information provided. Our practice of rating individual observations has led to well-informed discussions by all parties because, in order to understand the results of the audit, they have to read the report.

4. Overall Report Ratings Position Internal Audit as a Policing Function

We all know the stereotypes of internal audit as a policing function, and it is my opinion that the practice of rating overall reports has contributed to audit’s bad reputation. When audit slaps an unsatisfactory rating on an audit report with one or two high-risk issues and many passing controls, it can be viewed as alarmist — because, in essence, an “Unsatisfactory” rating is designed to alarm! The back-and-forth bickering that commonly follows any less-than-stellar overall rating needlessly creates an antagonistic relationship between internal audit and the business, and can dissuade process owners from sharing future issues that internal audit could otherwise help to fix.

Rating Individual Observations Positions Internal Audit as a Value-Add Partner 

Assigning risk ratings individually goes a long way toward defusing this recurring negative interaction, and helps show that internal audit aims to work with the people who run the business to improve the organization. If I’m not here to stick a label on an entire function, we have the opportunity to work together to address individual problems. I can raise issues that the process owner wasn’t aware of, and then work with them to apply a risk rating through our documented framework to help get them the resources they need to address it. In my experience, this collaborative approach has led process owners to find more value in our audits, and to be more willing to volunteer information about control weaknesses in future audits.

I’m not advocating that we do away with the concept of rating entirely — I just want to avoid a holistic rating that obscures some of the significant findings in an audit report. I’ve found that rating individual observations in audit reports can make a huge difference in the perception of audit as a value-add partner. When you approach an audit with the intention that the rating for each issue should drive the response, your process owners will see a more positive impact from your audit findings. Now, we have process owners coming to internal audit to ask for our help in a consulting capacity because they trust us. We’re not an antiquated policing function. We’re not here to get them in trouble. We’ve enhanced internal audit’s brand by demonstrating that we will work with them to improve the business because we have their best interests at heart. 

Chris

Chris Patrick, CIA, is the Head of Internal Audit and Sarbanes-Oxley (SOX) at Sunlight Financial, and previously led audit teams at Figure and RoundPoint Mortgage Servicing Corporation. He is currently a member of the Board of Governors with the Charlotte Chapter of the IIA, and has served as President of the Northern Virginia Chapter of the IIA. Connect with Chris on LinkedIn.