AI Gover-nuance: Don’t Play it Too Safe

Hadas Cassorla

March 19, 2025


Imagine being a grocery store owner a decade ago and hearing about “self-checkout” for the first time. The idea might feel appalling, especially after 40 years spent running a business focused on high-touch customer service. But then competitors begin rolling out self-checkout, and customers soon adapt and even start to prefer it. A single employee can now oversee six checkout lanes, drastically cutting overhead in an industry with notoriously slim margins.

Then COVID-19 hits, and no one wants face-to-face contact, even behind masks. Your competition is ready for this shift, but your own business, already barely scraping by, starts to flail.

Similar scenarios have played out many times when new technology enters the market. One prominent example is the Luddite movement, which opposed the industrialization of the textile industry. The Luddites protested the adoption of technology because it threatened their jobs. Eventually, new jobs emerged, but their movement has become synonymous with resisting technological change.

And now we have artificial intelligence. Most businesses are adopting it or scrambling to figure out how to embrace it. Chatbots are a ubiquitous entry point, but AI’s impact can stretch far beyond automated conversations.

Don’t Be A C-Level Luddite

AI coding apps can help a layperson build an entire platform, from drafting front-end components to orchestrating complex back-end logic. Healthcare is already using AI to predict disease outbreaks with startling accuracy, personalizing treatment plans to potentially save countless lives. Meanwhile, manufacturers can anticipate equipment failures before they occur, slashing costly downtime and boosting production efficiency. In short, AI is upending every industry.

Your company knows this. Your devs know this. Your marketing team knows this. And they are excited about discovering what AI can do to streamline drudge work, ship faster, and empower the company to move with greater agility—ultimately disrupting industries and setting new trends.

Then CISOs come in and say, “No!”

AI can be frightening. Used poorly, thoughtlessly, or nefariously, it could be the downfall of a company.

“I just uploaded our customer database into ChatGPT to get target demographic correlation for our next marketing drive.”

“I put our financial data into Grok to help build these amazing charts and predictive analyses.”

“I ran our employees’ personal medical data through an AI tool to predict their likelihood of taking sick leave.”

In comes the Department-of-No CISO and that dreaded word: “Governance.”

As CISOs, our responsibility is to ensure that the company is safe, that the data is protected, and that our customers can trust us. So we clamp down. We’re tech leaders, and yet we become C-level Luddites.

However, the CISO’s bigger responsibility is the company’s overall health—a company that fails has no data left to protect. And a company that is not experimenting with AI right now might not be in business in five years—or maybe three.

My advice—my hot take—be more permissive: implement governance-lite.

That’s right, I want you to wholeheartedly embrace AI in your environment. Don’t just grudgingly allow it, but encourage it as a catalyst for innovation and progress. Instead of defaulting to strict governance, think of implementing lean, targeted controls that let your teams explore new ideas and pivot quickly. Be open to experimentation, encourage creativity, and allow room for failure. This agility will help your organization stay ahead, adapt faster, and ultimately secure a more robust future in an AI-driven world.

Empower Your Team to Explore

As CISOs, we have a duty to mitigate risk in our environment—ensuring that threats to data integrity and business continuity are minimized. But in a rapidly evolving technological landscape, the greatest risk may well be standing still. AI isn’t just another tool; it’s fundamentally reshaping how businesses operate, innovate, and compete. 

Overly restrictive AI governance will not only prevent your company from innovating; it will also push employees to use AI behind your back. Banning AI creates an unsafe, unchecked shadow AI environment. This stealth adoption can lead to improper data sharing, hidden incidents or breaches, and a lack of insight into how data is being used. Attempting to inhibit AI use will cultivate an environment of blind spots and vulnerabilities.

Failing to keep up with AI’s development puts your organization at risk of falling behind its peers, missing critical opportunities, and ultimately jeopardizing your market position. Your obligation is to strike a delicate balance: vigorously protect your organization’s assets while empowering teams to explore AI-driven solutions that propel the company forward in a safe, responsible, and ultimately successful manner.

To accomplish this, you need to:

  • Make sure you know your company’s risk appetite.
  • Know where your crown jewel data resides.
  • Have awesome vulnerability and asset management.
  • Teach your team about public vs. private AI.

Once you do all this, you can instill governance that won’t impede progress. For all crown jewel data, implement protective measures to prevent data loss and public AI access. For teams that want to build around these crown jewels, you can do two things.
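As one hedged illustration of what a protective measure might look like, here is a minimal outgoing-prompt filter that blocks text matching crown-jewel patterns before it can reach a public AI. The patterns and function name are illustrative assumptions, not a complete DLP solution; a real deployment would use a proper DLP engine tuned to your own data classifications.

```python
import re

# Illustrative "crown jewel" patterns; tune these to your own data classifications.
CROWN_JEWEL_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # US SSN-style identifiers
    re.compile(r"\b\d{13,16}\b"),                   # long numeric runs (card-like)
    re.compile(r"(?i)\bpatient\b.*\bdiagnosis\b"),  # crude medical-record heuristic
]

def check_outgoing_prompt(text: str) -> tuple[bool, list[str]]:
    """Return (allowed, reasons): block if any crown-jewel pattern matches."""
    reasons = [p.pattern for p in CROWN_JEWEL_PATTERNS if p.search(text)]
    return (not reasons, reasons)

allowed, reasons = check_outgoing_prompt("Summarize our Q3 roadmap")
print(allowed)  # True: no sensitive patterns matched
```

A guardrail like this sits in the path to public AI tools only; traffic to your private instances can stay unfiltered, which keeps the safe path the easy path.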

Set Up Privately Hosted AIs 

By hosting AI services in an environment you fully control—whether on-premises or in a private cloud—you ensure sensitive data never leaves your domain. This approach not only maintains data privacy and compliance but also grants you greater autonomy over how the AI models are configured, trained, and deployed. As a result, your teams can safely leverage internal data for training and fine-tuning models, unlocking AI’s potential for deeper insights and efficiency without compromising on security or compliance.
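From the team's point of view, a private instance can look just like a public one. The sketch below builds a chat-completion request against a self-hosted, OpenAI-compatible endpoint; the URL and model name are placeholder assumptions for whatever you run behind your firewall (for example, vLLM or Ollama).

```python
import json
import urllib.request

# Placeholder for your self-hosted, OpenAI-compatible endpoint; nothing here
# points at a real service, and the request never leaves your network.
PRIVATE_AI_URL = "http://ai.internal.example:8000/v1/chat/completions"

def build_request(prompt: str, model: str = "internal-llm") -> urllib.request.Request:
    """Build a chat-completion request aimed at the private instance."""
    payload = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    return urllib.request.Request(
        PRIVATE_AI_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

req = build_request("Cluster our support tickets by theme")
# urllib.request.urlopen(req) would send this inside the private network.
```

Because the interface matches the public APIs teams already know, moving a workload from a public tool to the private instance is a one-line URL change rather than a rewrite.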

Create an AI Test Environment

You already have dev, test, and prod environments, so why not create an AI instance? In this environment, you can deploy fake data using any of the many tools that can generate synthetic test data.

By treating AI as an independent “sandbox” environment, you ensure that new AI-driven initiatives and experiments are isolated from your production systems. This approach allows you to explore and validate AI use cases without risking disruption to business-critical operations or exposing real sensitive data. Synthetic test data provides a safe way to experiment with different scenarios, models, and outcomes—empowering your teams to refine AI strategies and workflows before integrating them into your core environments.
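As one minimal stdlib sketch (dedicated tools such as Faker or SDV offer far richer generators), synthetic customer records for the sandbox can be produced like this; every name, field, and value here is invented for illustration:

```python
import random
import string

random.seed(7)  # reproducible synthetic data for repeatable experiments

FIRST_NAMES = ["Ana", "Ben", "Chen", "Dara", "Eli"]
CITIES = ["Austin", "Boise", "Carmel", "Dayton"]

def synthetic_customer() -> dict:
    """One fake customer record: realistic shape, no real person behind it."""
    return {
        "name": random.choice(FIRST_NAMES),
        "email": "".join(random.choices(string.ascii_lowercase, k=8)) + "@example.com",
        "city": random.choice(CITIES),
        "lifetime_value": round(random.uniform(10, 5000), 2),
    }

dataset = [synthetic_customer() for _ in range(100)]
print(len(dataset))  # 100 records safe to feed into the AI sandbox
```

Seeding the generator means an experiment that misbehaves can be rerun on the exact same fake data, which makes the sandbox useful for debugging as well as exploration.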

Anything that is not crown jewels? Let your people cook, with one caveat: teach them all about data and AI. Be the guide, not the gatekeeper. Be open to questions like, “Is it OK if I put everyone’s medical charts into DeepSeek just to see what insights we get?” You may be tempted to say no, but this is an opportunity to highlight your alternate instance and to educate your users about public vs. private AI.

Conclusion

As a security leader, being permissive with AI isn’t just an alternative—it’s a competitive necessity. Overly strict controls can crush the creativity and experimentation that unlock AI’s true potential. The job of a CISO is to mitigate risks, and a huge risk to the company is losing to other companies more willing to leverage new technology. While protecting sensitive data remains non-negotiable, embracing innovation signals to teams that they can explore, iterate, and imagine. By focusing on targeted guardrails instead of broad restrictions, you encourage both growth and secure AI adoption, ensuring your organization thrives in a landscape where playing it too safe is the bigger risk.

Hadas Cassorla

Hadas Cassorla, JD, MBA, CISSP has a lot of letters after her name, but the three letters she cares the most about are Y-E-S. Marrying her improv and legal background into technology and business, she helps organizations build strong, actionable and implementable security programs by getting buy-in from investors, the boardroom and employees. She has founded her own business, Scale Security Group, and has built corporate security offices from ground-up.
