AI Goes Mainstream: What Enterprises Need to Know

As generative AI adoption accelerates, it’s critical that your organization take advantage of AI capabilities in a responsible, secure manner. Compliance teams are increasingly using AI to generate first drafts of risk and control descriptions, reducing manual toil for busy teams. AuditBoard’s CISO, Richard Marcus, shares AI use cases for audit, risk, and compliance teams, and explains how organizations can take a responsible approach to AI that encourages innovation while mitigating risk.

Watch the full conversation, and read the can’t-miss highlights below.

Richard Marcus shares his experiences developing responsible AI use policies to support innovation while mitigating AI risks.

Governance, Risk, and Compliance Use Cases for AI

To focus on GRC use cases: much of the work we do at AuditBoard centers on reducing manual toil for busy teams. For instance, if you’re drafting new controls, AI can generate descriptions from just a title or risk name, helping you build out your risk register and, from there, your control library.
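
To make that concrete, here’s a minimal sketch of what a drafting step like this could look like, assuming the OpenAI Python client. The model name and prompt wording are illustrative assumptions, not a description of AuditBoard’s implementation.

```python
# Minimal sketch: drafting a control description from a risk name.
# Assumes the OpenAI Python client (openai>=1.0); the model choice and
# prompt wording are illustrative, not AuditBoard's implementation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_control_description(risk_name: str) -> str:
    """Generate a first-draft control description for a named risk."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model works here
        messages=[
            {"role": "system",
             "content": ("You are a GRC analyst. Draft concise, auditable "
                         "control descriptions and label them as drafts "
                         "requiring human review.")},
            {"role": "user",
             "content": f"Draft a control description for the risk: {risk_name}"},
        ],
    )
    return response.choices[0].message.content

print(draft_control_description("Unauthorized access to production databases"))
```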

Or consider how compliance teams map requirements from one framework to another. Traditionally, analysts go line by line through both documents, figuring out which items are similar, which are different, and how they should be associated with one another. It’s tedious, manual work; early in my career, it might have taken me two whole weeks.

Now, generative AI can do that work far more quickly. It won’t be perfect, but AI can automate the first 90% and have a human finish and polish the final 10%, run a quality control check, and complete the final product. We’re seeing a lot of opportunities like this across audit, risk, and compliance: the technology can automate tedious tasks, keep your team engaged, free them up for high-value, strategic work, and make processes more efficient and accurate.
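
One way to picture that 90/10 split in code is an embedding-based first pass that auto-maps high-confidence matches and routes everything else to a human reviewer. This sketch assumes OpenAI embeddings and an arbitrary 0.80 similarity threshold; both are illustrative choices rather than a prescribed method.

```python
# Sketch: mapping requirements between two frameworks with embeddings,
# routing low-confidence matches to a human reviewer. The embedding
# model and the 0.80 threshold are illustrative assumptions.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in resp.data])

def map_frameworks(source: list[str], target: list[str],
                   threshold: float = 0.80) -> None:
    src, tgt = embed(source), embed(target)
    # Normalize rows so the dot product equals cosine similarity.
    src /= np.linalg.norm(src, axis=1, keepdims=True)
    tgt /= np.linalg.norm(tgt, axis=1, keepdims=True)
    similarities = src @ tgt.T  # shape: (len(source), len(target))
    for i, row in enumerate(similarities):
        j = int(row.argmax())
        status = "auto-mapped" if row[j] >= threshold else "NEEDS HUMAN REVIEW"
        print(f"{source[i]!r} -> {target[j]!r} ({row[j]:.2f}, {status})")

map_frameworks(
    ["Encrypt data at rest", "Review user access quarterly"],
    ["Cryptographic protection of stored information",
     "Periodic access recertification"],
)
```

In practice, the threshold becomes the dial for how much of the mapping you trust to automation versus human review.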

Mitigate the Risk of AI Usage

When audit, risk, and compliance professionals think about embracing new technology, mitigating the associated risks is top of mind. If you’re a CISO like me, you might ask: Is AI really my problem?

Technically, AI is a connected risk problem that people across the organization need to work together to solve. However, there’s no reason you can’t lead the charge, especially since the biggest gap is in training and awareness. You can be someone who educates the organization on what the risks are. The process usually starts with an assessment: identify how AI presents risk to your business, anticipate likely misuses of the technology, and work cross-functionally to develop policies, controls, and procedures that address those risks.

This process is very similar to mitigating any other emerging risk: someone has to identify and define it, then marshal cross-functional leaders to develop a posture that defends against and mitigates it.

As an example, here’s how AuditBoard is working to mitigate AI risks within our own organization. One of our goals is to identify reasonably foreseeable misuse of the technology. A primary risk is people not understanding how to use AI responsibly. Here are some questions to consider:

  • Do you understand the limitations of AI?
  • Are you over-reliant on, or overconfident in, the output of AI?
  • Is there a risk of information leakage, intellectual property leakage, or other sensitive data leakage into third-party or open-source AI systems? (A minimal guardrail sketch follows this list.)
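
To illustrate that last leakage question, here’s a deliberately simple guardrail sketch that scans a prompt for obviously sensitive patterns before it reaches a third-party AI system. The regexes are placeholder assumptions and no substitute for a real data loss prevention tool.

```python
# Minimal guardrail sketch: scan a prompt for obviously sensitive
# patterns before sending it to a third-party AI system. These regexes
# are simplistic placeholders, not a real DLP implementation.
import re

SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

prompt = "Summarize this ticket from jane.doe@example.com about SSN 123-45-6789."
findings = check_prompt(prompt)
if findings:
    raise ValueError(f"Blocked: prompt contains {', '.join(findings)}")
```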

When developing responsible AI use policies, it’s crucial to keep people from making decisions based on raw AI output. People should have high confidence in the data they feed AI models, understand the limitations of those models, and use discretion when assessing the output before acting on it. If you’re developing products that incorporate AI capabilities, think about how you can help the end user through this cycle so they can build trust in the model’s output and document how they’ve made certain decisions.
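
One way a product could support that discretion is to attach provenance and a human sign-off to every AI-assisted artifact, so decisions stay traceable. The field names and workflow in this sketch are assumptions, not a prescribed schema.

```python
# Illustrative sketch: record provenance and a human sign-off for each
# AI-assisted artifact so decisions stay traceable. Field names are
# assumptions, not a prescribed schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIDecisionRecord:
    prompt: str
    model: str
    raw_output: str
    final_text: str = ""   # the human-edited version actually relied on
    reviewer: str = ""     # who validated the output
    approved: bool = False
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def sign_off(self, reviewer: str, final_text: str) -> None:
        """Mark the record as human-reviewed before anyone relies on it."""
        self.reviewer, self.final_text, self.approved = reviewer, final_text, True

record = AIDecisionRecord(
    prompt="Draft a control for quarterly access reviews",
    model="gpt-4o-mini",
    raw_output="Access rights are reviewed quarterly by system owners...",
)
record.sign_off("r.marcus", "Access rights for all in-scope systems are reviewed quarterly...")
```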

Artificial Intelligence Is an Emerging Threat Vector

AI is a force for good. It’s also a force for evil. Like any other technological advancement in human history, it’s here to stay. We have to apply the same principles we’d use for any other technology: risk management, governance, access management, transparency, logging, monitoring, and so forth. It’s critical to start those conversations now. AI is an emerging threat vector, and we can expect government regulators to hold us accountable for treating it as such.

Looking for more thought leadership? Check out our on-demand webinar library to hear leaders and experts discuss timely issues, insights, and experiences.