Expert Insights: Navigating the New Era of AI

As artificial intelligence (AI) continues to revolutionize industries worldwide, organizations must understand its implications for data privacy, risk, and governance. Damaris Fynn (Americas Risk Analytics Leader, EY) hosts a lively conversation with Samta Kapoor (Responsible AI and AI Energy Leader, EY) and Sarah Liang (Americas AI Risk Leader, EY) on the multifaceted realm of AI, breaking down misconceptions and highlighting emergent risks. Topics include:

  • What AI actually is and the risks associated with its implementation
  • How new AI regulations affect data privacy, risk, and governance
  • A framework for implementing responsible AI within your organization
  • Which regulations shape responsible AI and how to navigate them

What is artificial intelligence? 

Samta Kapoor (Responsible AI and AI Energy Leader, EY): There are many different ways to use, program, and develop artificial intelligence to get the insights you need. Think about artificial intelligence as the umbrella, with machine learning as a subset, and deep learning as a subset of machine learning. 

Artificial intelligence has been around since the 1940s. This is not new. This technology has existed for a long time in many shapes and forms. However, three things changed in November 2022. The first change was a democratization of AI in a way that we’ve never seen before: you no longer need to be a data scientist to play with AI. With ChatGPT on your phone, you can access a lot of insights with readily accessible technology. The second change was the amount of investment in this field; there has been a 429% increase in AI investment. The third change was the emergent capabilities of this technology. As the phrase generative AI suggests, this technology can now generate text, video, and audio.

What are the risk considerations associated with artificial intelligence? 

Sarah Liang (Americas AI Risk Leader, EY): There’s been an uptick in AI-generated fraud. Scammers can use machine learning and data scraping to train generative AI to mimic how you speak and write. As the world opens up to AI, you can’t always trust what you’re getting, which underlines the need for validation processes. AI is also used for decision-making; banks, for instance, leverage AI in processing loan applications to decide who gets a car loan. If AI isn’t trained fairly and transparently, it makes decisions based on biases around gender, race, sexual orientation, and more. That raises important questions: who has your data? Do you still own your data? How could it be misused, whether it’s your image, your name, or your ID?

Tell us more about responsible AI as a framework, and how internal audit functions can apply that. 

Sarah Liang (Americas AI Risk Leader, EY): For starters, AI isn’t a standalone topic. The internal audit function helps the company mitigate and monitor risk, and it has a place in the overall enterprise risk assessment and risk management frameworks. You have to consider a lot of questions:

  • What is the company strategy for AI?
  • What is the short-term view?
  • What is the long-term view?
  • How are you deploying AI? 
  • How are you leveraging AI to compete in the marketplace?
  • How do you create a business advantage for yourself?
  • Are you using AI to protect cybersecurity as a key risk?
  • What are you doing to protect your assets, your IP, and your employees?

This all adds up to one primary question: are you leveraging AI to drive down the cost of compliance? Responsible AI means considering AI regulations to ensure your AI systems are reliable, fair, and beneficial for your employees. Companies have a responsibility to regulators, which includes a need to comply with key requirements as rolled out by standards boards or the government.

Companies also have a responsibility to customers: every company has a fiduciary duty to customers and key stakeholders. Lastly, you have a responsibility to your employees. What are you doing to ensure they have the right access and upskilling to use AI to become more efficient and productive?

Then, companies can home in on a responsible AI model with nine pillars. These nine principles are about ensuring your AI is accountable. Is it fair? Is it explainable? Does it protect privacy? A lot of us are used to testing tech in the SOX world. With SOX, you can baseline, benchmark, and move on. AI is different: you can’t code it and forget it. AI evolves on its own because it is generative.
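
To make that contrast concrete, here is a minimal sketch of what a recurring AI control might look like, as opposed to a one-time SOX-style baseline-and-benchmark test. The 10% tolerance, the window contents, and the function names are illustrative assumptions, not EY’s methodology:

```python
# A minimal, hypothetical sketch of a recurring drift check. The 10% tolerance
# and the sample outcomes below are illustrative assumptions only.

def rate(outcomes):
    """Share of positive outcomes (e.g., loan approvals) in a window."""
    return sum(outcomes) / len(outcomes)

def drift_alert(baseline_outcomes, recent_outcomes, tolerance=0.10):
    """Flag when the recent approval rate drifts beyond the tolerance
    from the rate observed when the model was last validated."""
    baseline, recent = rate(baseline_outcomes), rate(recent_outcomes)
    return abs(recent - baseline) > tolerance, baseline, recent

baseline = [1, 1, 0, 1, 0, 1, 1, 0]  # outcomes captured at validation time
recent = [1, 0, 0, 0, 1, 0, 0, 0]    # outcomes from the current review period

alerted, b, r = drift_alert(baseline, recent)
print(f"baseline rate {b:.2f}, recent rate {r:.2f}, alert: {alerted}")
```

Unlike a SOX test that is run once and documented, a check like this would be rerun every review period, because a deployed model’s behavior can keep changing.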

How do you use a responsible AI framework to test bias?

Samta Kapoor (Responsible AI and AI Energy Leader, EY): The framework gives us a lot of actionable decisions that we can make for the use case we’re solving for. But there are risks associated with that across the entire lifecycle of AI.

That’s why it’s important to underline that you should be thinking about responsible AI, bias, and fairness from the design stage. Relying on regulatory intervention after the fact isn’t enough. For instance, companies can face severe reputational loss if they don’t have responsible AI principles in place. These principles must be validated by the C-suite, but also by the data scientists who are building the models.

What does fairness mean when a CEO talks about it? What does fairness mean when a data scientist talks about it? How do you bridge that gap? These are critically important questions. Strong governance is necessary because the humans building AI and inputting data bring unconscious bias with them, which the algorithm then translates into its outputs. We have to continuously monitor for bias throughout the lifecycle of AI.
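
One common way to operationalize that monitoring is a disparate impact check, which compares approval rates across groups. The sketch below is one such check under stated assumptions: the 0.8 red-flag threshold is the “four-fifths rule” borrowed from US employment-law practice, and the groups, decisions, and function names are illustrative rather than part of EY’s framework:

```python
# A minimal, hypothetical bias check: the disparate impact ratio compares each
# group's approval rate to a reference group's. The 0.8 threshold (the
# "four-fifths rule") and the sample data are illustrative assumptions.

from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(decisions, reference_group):
    """Ratio of each group's approval rate to the reference group's rate."""
    rates = approval_rates(decisions)
    ref = rates[reference_group]
    return {g: r / ref for g, r in rates.items()}

# Illustrative model outputs: (protected group, approved?)
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]

for group, ratio in disparate_impact(sample, reference_group="A").items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"group {group}: impact ratio {ratio:.2f} ({flag})")
```

Run on a schedule rather than once, a check like this is one way to surface the unconscious bias Kapoor describes before regulators or customers do.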

Looking for more thought leadership? Check out our on-demand webinar library for more leaders and experts discussing timely issues, insights, and experiences.