Expert Insights: How AI Radically Reshapes Audit, Risk and Business Strategy
IDC predicts that global spending on AI will accelerate to over $301 billion by 2026. To remain competitive, companies must prepare to manage AI risk while leveraging its capabilities. Happy Wang (Chief Development Officer, AuditBoard) hosts a lively discussion among Melissa Pici (Senior IT Audit Manager, Syniverse), Matthew Yoshida (GRC Lead, OpenAI), and Anton Dam (Vice President of Data, AI, and ML, AuditBoard) on how AI impacts audit, risk, and compliance at their organizations, including:
- Leveraging AI in your department as a GRC leader
- Assessing the potential future of AI
- Solving roadblocks presented by artificial intelligence
Watch the full conversation, and read the can’t-miss highlights below.
Matt, how is AI used in your department as a GRC leader?
Matthew Yoshida (OpenAI): Part of OpenAI’s mission is research, which means we’re constantly pushing the bar on technology. There are lots of really cool use cases, but when you think about how to effectively use them in a GRC context, the key thing is auditability. I don’t think we’re ready to have a completely autonomous AI model taking over controls. You still need a human in the loop at this point, which means it’s critical to ensure your employees understand there are approvals specific to certain use cases when leveraging this technology.
Anton, as an AI specialist, can you share your vision for what is coming with AI?
Anton Dam (AuditBoard): We can use AI as a capacity multiplier to supercharge work efficiency. I need to make the most of my time and get the most value out of it. Clarity, impact, and speed are top priorities, and that’s what the audit profession needs from AI technology. The audit world is full of repetitive tasks, and we hope that this technology will help us complete them faster.
Melissa, what are some roadblocks you’ve encountered with AI and what are you most excited about going forward?
Melissa Pici (Syniverse): You can’t throw something out there and expect people to learn on their own. You have to home in on your risk culture so people understand the consequences of their actions. For instance, when ChatGPT first came out, someone from a major company put their proprietary code into it. Now, that’s out in the wild. Sometimes, people don’t consider the risks. It’s critical to make sure employees and executives are educated about AI.
What I’m most excited about is automating the most mundane tasks and getting minutiae off my plate. Some repetitive tasks still take me three or four hours. I’d rather hand them to a machine and figure out how to automate and normalize them. The biggest thing is taking those admin-level tasks off my plate so I can do what I’m paid to do. I feel a strong sense of responsibility when it comes to AI. We are at the forefront of risk, which means we have the ability to affect not just our companies, but also society.
Looking for more thought leadership? Check out our on-demand webinar library for more leaders and experts discussing timely issues, insights, and experiences.