Joan Pepin is Bigeye's Chief Information Security Officer, bringing 27 years of cybersecurity leadership to our team. She's held CISO and CSO roles at companies like Sumo Logic, Auth0, and Nike Digital, and co-founded her own security startup. As AI becomes central to data operations, she's been at the forefront of developing risk frameworks that protect businesses while enabling innovation.
I've been in cybersecurity for nearly thirty years, and I've seen plenty of technology waves come and go. But what's happening with AI right now is different. The speed of adoption is unlike anything I've witnessed (even cloud adoption felt slower than this), and the gap between deployment and risk management is genuinely concerning.
Here's the reality check:
According to a 2025 SandboxAQ AI Security Benchmark Report, 79% of organizations are already running AI in production, but only 6% have put in place a comprehensive security strategy designed specifically for AI¹.
Think about that gap for a moment. We have nearly eight out of ten organizations deploying AI systems that can make autonomous decisions, handle sensitive data, and directly impact operations—yet only six out of a hundred have bothered to create an AI-specific security strategy. We're essentially building on quicksand and hoping the foundation holds.
If you're not managing AI risk, you're not managing AI: you're chasing hype.
The Promise of AI and the Problem of Risk
Look, I'm not an AI skeptic anymore. Far from it. AI can boost productivity, automate tedious tasks, and help teams move faster. I've seen it transform how we approach security operations, customer service, and product development. The potential is real, and the business value is measurable.
But none of that matters if you can't trust what the AI is doing or explain how it got there.
The biggest threat to AI success isn't model failure — it's unexamined risk. When I talk to CISOs and board members, I ask them simple questions: Where does your model's data come from? Who has access to the outputs? Can your auditors verify the decisions? The silence that follows is deafening.
Only 58% of respondents have completed a preliminary assessment of AI risks in their organization².
Think about that. We're deploying technology that can make autonomous decisions affecting customers, revenue, and reputation—and nearly half of us haven't even done a basic risk assessment.
Four Categories of AI Risk Most Teams Are Overlooking
Through my work with Edge Delta and conversations with peers across the industry, I see four critical risk categories that organizations consistently underestimate or ignore entirely:
1. Security Risk: When Machines Get Privileges
AI agents and non-human identities often operate with real privileges but minimal oversight. We've spent years implementing zero trust for human users—multi-factor authentication, least privilege access, continuous monitoring. But what about our AI systems?
40% of organizations experienced data security incidents from the use of AI applications in 2024, up from 27% in 2023³.
These aren't hypothetical risks. They're happening right now, in production environments, affecting real organizations.
I recently reviewed an environment where an AI agent had read-write access to application configurations. When I asked about access controls, the response was, "Well, it needs that access to function." That's not risk management; that's wishful thinking. We need to govern machines with the same rigor we apply to users — perhaps more, since machines don't get tired, don't take breaks, and can operate at superhuman speed when compromised.
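What does that rigor look like in practice? Below is a minimal sketch of least-privilege enforcement for an AI agent's tool calls. The agent name, tool names, and policy fields are hypothetical illustrations, not any particular framework's API; a real deployment would tie this into your identity provider and secrets management rather than an in-memory dictionary.

```python
# A minimal sketch of least-privilege enforcement for an AI agent's tool calls.
# Agent name, tool names, and policy fields are hypothetical, not any framework's API.
from dataclasses import dataclass, field


@dataclass
class AgentPolicy:
    name: str
    allowed_tools: set = field(default_factory=set)  # explicit allow-list, empty by default
    read_only: bool = True                            # deny writes unless granted


POLICIES = {
    "config-summarizer": AgentPolicy("config-summarizer", {"read_config"}),
}


def invoke_tool(agent: str, tool: str, mutates: bool):
    policy = POLICIES.get(agent)
    if policy is None or tool not in policy.allowed_tools:
        raise PermissionError(f"{agent} is not authorized to call {tool}")
    if mutates and policy.read_only:
        raise PermissionError(f"{agent} holds read-only access; {tool} would write")
    print(f"AUDIT: {agent} invoked {tool}")  # every machine action leaves a trail


invoke_tool("config-summarizer", "read_config", mutates=False)    # allowed, and audited
# invoke_tool("config-summarizer", "write_config", mutates=True)  # raises PermissionError
```

The point is the default posture: deny unless explicitly allowed, keep writes behind a separate grant, and log every invocation.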
2. Privacy Risk: LLMs Never Forget
LLMs don't forget on their own. If sensitive data is provided without privacy controls, it may be stored or used for training, and it can later be exposed through memorization, logging, or downstream fine-tuning.
43% of companies are focused on preventing sensitive data from being uploaded into AI apps³.
But prevention is only part of the equation. What about the data that's already there? What about the models already trained on your proprietary information?
I've seen organizations accidentally expose customer PII, financial records, and strategic plans through poorly configured AI systems. The scary part is that unlike traditional data breaches where you can identify and contain the exposure, with AI, the contamination can be baked into the model itself.
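Prevention at the boundary is the cheapest control available. As a rough illustration only (not a substitute for a proper DLP or PII-detection service), here's a sketch of scrubbing obvious identifiers from a prompt before it ever reaches an external model; the patterns are deliberately simplistic placeholders.

```python
import re

# Deliberately simplistic placeholder patterns; a real control would use a
# dedicated PII-detection / DLP service, not three regexes.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}


def redact(text: str) -> str:
    """Replace obvious identifiers with typed placeholders before text leaves your boundary."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text


prompt = "Summarize this ticket from jane.doe@example.com, SSN 123-45-6789."
print(redact(prompt))
# Summarize this ticket from [EMAIL], SSN [SSN].
```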
3. Operational Risk: Garbage In, Catastrophe Out
Poor data quality leads to hallucinations and unreliable decisions. If the data's wrong, the AI's wrong. But it's worse than that—AI can amplify bad data in ways that create cascading failures across your organization.
62% of organizations have deployed an AI package with at least one known vulnerability (CVE)⁴.
While most vulnerabilities are currently low to medium risk, a single vulnerability can create a critical attack path. We're building complex systems on foundations we don't fully understand, using packages we haven't properly vetted.
The operational risk isn't just about accuracy—it's about reliability, consistency, and predictability. When your AI makes a decision, can you trace it back? Can you explain it to a regulator? Can you reproduce it six months from now?
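Answering those questions later means capturing the evidence now. Here's a minimal sketch of the kind of audit record I mean, with hypothetical field and system names: log the model, its version, and hashes of the input and output at inference time, so a decision can be tied to a specific model build without the log itself becoming a data leak.

```python
import hashlib
import json
import time
from dataclasses import dataclass, asdict


@dataclass
class InferenceRecord:
    model_name: str
    model_version: str
    input_hash: str    # hash rather than raw input, so the audit log is not itself a leak
    output_hash: str
    timestamp: float


def record_inference(model_name: str, model_version: str, prompt: str, output: str) -> InferenceRecord:
    rec = InferenceRecord(
        model_name=model_name,
        model_version=model_version,
        input_hash=hashlib.sha256(prompt.encode()).hexdigest(),
        output_hash=hashlib.sha256(output.encode()).hexdigest(),
        timestamp=time.time(),
    )
    # Append-only local file for illustration; a real system ships this to durable,
    # tamper-evident storage with retention aligned to your audit requirements.
    with open("inference_audit.jsonl", "a") as fh:
        fh.write(json.dumps(asdict(rec)) + "\n")
    return rec


record_inference("support-triage", "2025-06-01", "Customer asks about a refund", "Route to billing queue")
```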
4. Reputational and Legal Risk: Who Takes the Fall?
AI models can produce biased or unexplainable outputs. When the AI gets it wrong, who's responsible — your brand, or the model?
84% believe that an independent audit of their AI models will be a requirement within the next one to four years⁵.
The regulatory landscape is evolving rapidly. The EU AI Act is already in effect. Other jurisdictions are following suit. But regulations are lagging behind deployment, creating a dangerous gap where organizations operate without clear guidelines or standards.
The State of AI Risk Management: A Reality Check
Let me share some sobering statistics that highlight just how unprepared we are:
47% say their organizations have experienced at least one consequence from gen AI use⁶.
Nearly half of organizations using AI have already experienced negative consequences. This isn't a future risk—it's a present reality.
AI is still treated as a technical initiative, not a business-wide transformation. Risk and compliance teams are often brought in too late—after things break. I've seen this movie before with cloud adoption, and we're making the same mistakes, only faster and with higher stakes.
There's no standard playbook. Most organizations are learning the hard way.
Only 19% of respondents say that they explicitly have the expertise to conduct such audits internally⁵.
We're deploying technology we can't properly assess, using tools we don't fully understand, in environments we can't adequately secure.
Building a Mature AI Risk Strategy
This isn't about slowing innovation—it's about how you scale it with trust. Preventing the security and privacy incidents that would set your program back months, or worse, actually speeds you up in the long run. A mature AI risk strategy doesn't inhibit progress; it enables it by creating guardrails that let you move faster with confidence.
Here's what I tell organizations looking to mature their AI risk management:
Integration Into Existing Frameworks
Don't create a separate AI risk silo. Integrate AI risk management into your existing risk and compliance frameworks. Your GRC tools, your risk registers, your audit processes—they all need to account for AI-specific risks.
Nearly half (49%) of technology leaders in PwC's October 2024 Pulse Survey said that AI was "fully integrated" into their companies' core business strategy².
If AI is core to your business strategy, AI risk management must be core to your risk strategy.
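Concretely, an AI risk should look like any other entry in your register. Here's an illustrative sketch; the fields, scoring scale, system name, and control ID are hypothetical examples, not a prescribed schema.

```python
from dataclasses import dataclass


@dataclass
class AIRiskEntry:
    system: str          # which AI system the entry covers
    category: str        # security | privacy | operational | reputational/legal
    description: str
    likelihood: int      # 1 (rare) .. 5 (almost certain)
    impact: int          # 1 (negligible) .. 5 (severe)
    owner: str           # an accountable person, not a team alias
    mapped_control: str  # the existing GRC control this risk rolls up to

    @property
    def score(self) -> int:
        return self.likelihood * self.impact


entry = AIRiskEntry(
    system="support-chatbot",
    category="privacy",
    description="Customer PII sent to a third-party LLM without redaction",
    likelihood=4,
    impact=4,
    owner="jane.doe",
    mapped_control="DP-07 data handling",
)
print(entry.score)  # 16: same register, same review cadence as any other enterprise risk
```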
Cross-Functional Alignment
AI risk isn't just a security problem or a compliance problem—it's an enterprise problem. You need alignment between security, privacy, governance, and data teams. This means breaking down silos and creating cross-functional teams that can assess and address AI risks holistically.
I've seen too many organizations where the security team doesn't know what AI models are in production, the data team doesn't understand the security implications, and the compliance team is completely in the dark. That's a recipe for disaster.
Observability Across AI Pipelines
You can't secure what you can't see. Implement observability across your AI pipelines and agent behavior. This means logging, monitoring, and alerting at every stage of the AI lifecycle—from data ingestion to model training to inference to output.
77% of organizations believe that AI will accelerate their ability to discover unprotected sensitive data, detect anomalous activity, and automatically protect at-risk data³.
Ironically, we need AI to help us manage AI risk. But this creates its own challenges—who watches the watchers?
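Whoever ends up watching the watchers, the starting point is the same: structured events at every stage. Here's a minimal sketch of stage-level instrumentation, with hypothetical pipeline and stage names; a real deployment would ship these events to your existing observability stack rather than a local logger.

```python
import json
import logging
import time
from contextlib import contextmanager

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("ai-pipeline")


@contextmanager
def stage(pipeline: str, name: str):
    """Emit one structured event per pipeline stage, whether it succeeds or fails."""
    start = time.time()
    status = "ok"
    try:
        yield
    except Exception:
        status = "error"
        raise
    finally:
        log.info(json.dumps({
            "pipeline": pipeline,
            "stage": name,  # ingestion, training, inference, output filtering...
            "status": status,
            "duration_s": round(time.time() - start, 3),
        }))


with stage("churn-model", "data_ingestion"):
    rows = [{"customer": "a-123", "churned": False}]  # stand-in for the real ingest step

with stage("churn-model", "inference"):
    prediction = 0.12  # stand-in for the real model call
```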
Data Quality and Lineage
Strong data quality and lineage must come before automation begins. You need to know where your data comes from, how it's been transformed, who's touched it, and where it's going. This isn't just about compliance—it's about trust.
Without data lineage, you can't troubleshoot problems, you can't audit decisions, and you can't improve your models. You're flying blind, hoping nothing goes wrong.
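Even a lightweight lineage record beats flying blind. Here's an illustrative sketch with hypothetical dataset, job, and model names: capture where data came from, what touched it, and which models it feeds, so questions can be walked backwards from an output to a source.

```python
from dataclasses import dataclass, field


@dataclass
class LineageRecord:
    dataset: str
    source: str                                           # where the data originated
    transformations: list = field(default_factory=list)   # steps applied, in order
    touched_by: list = field(default_factory=list)        # jobs and people that modified it
    feeds: list = field(default_factory=list)             # downstream models and reports


customer_events = LineageRecord(
    dataset="customer_events_v3",
    source="s3://raw/events/2025-06",
    transformations=["dedupe", "pii_redaction", "sessionize"],
    touched_by=["etl_nightly", "jane.doe"],
    feeds=["churn-model:2025-06-01"],
)

# When a model decision is questioned, walk from `feeds` back to `source`
# and you can state exactly what data the decision rested on.
print(customer_events.source, "->", customer_events.feeds)
```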
The Human Element: Your First and Last Line of Defense
Here's something we don't talk about enough: You wouldn't let a new employee into production systems without onboarding. Why give AI a pass?
AI systems need the same scrutiny we apply to human users—background checks (vetting the model and its training data), onboarding (proper configuration and testing), ongoing training (continuous monitoring and updates), and performance reviews (regular audits and assessments).
But there's another crucial aspect: training your humans to work with AI safely.
42% are investing in employee training on secure AI use³.
That's not enough. Every employee touching AI needs to understand the risks, the controls, and their responsibilities.
Moving Forward: From Panic to Plan
AI is moving fast, but trust and risk management haven't kept up.
78% of respondents say their organizations use AI in at least one business function, up from 72% in early 2024 and 55% a year earlier⁶.
But adoption doesn't equal maturity.
AI isn't a special project anymore. It's a new class of business risk—and it needs to be treated that way. This means board-level attention, dedicated resources, and a commitment to doing it right, not just doing it fast.
Here's my advice for organizations looking to close the AI risk gap:
Start with an AI risk assessment. Not next quarter, not after the next deployment—now. Understand what AI you're using, where it's deployed, what data it touches, and what risks it creates (a minimal inventory sketch follows these recommendations).
Build governance and observability into your AI efforts from day one. Retrofitting security and compliance is expensive and ineffective. Design it in from the start.
Invest in your people. The technology will evolve, the risks will change, but skilled people who understand both AI and risk will remain your most valuable asset.
Accept that perfection isn't the goal. You won't eliminate all AI risks. That's not realistic. But you can understand them, manage them, and make informed decisions about which risks to accept.
Create feedback loops. When something goes wrong—and it will—learn from it. Document it. Share it. The entire industry needs to level up together.
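To make that first step concrete, here's a minimal sketch of an AI inventory; the systems, fields, and prioritization rule are hypothetical illustrations of the questions an assessment should answer, not a prescribed template.

```python
from dataclasses import dataclass


@dataclass
class AISystem:
    name: str
    business_function: str
    deployment: str           # where it runs: vendor SaaS, your VPC, on-prem...
    data_touched: list        # classes of data, not row counts
    autonomous_actions: bool  # can it act without a human in the loop?
    risk_assessed: bool       # has it been through the assessment yet?


inventory = [
    AISystem("support-chatbot", "customer service", "vendor SaaS",
             ["customer PII", "ticket history"], autonomous_actions=False, risk_assessed=True),
    AISystem("config-summarizer", "platform operations", "VPC",
             ["application configs"], autonomous_actions=True, risk_assessed=False),
]

# The gap report writes itself: anything autonomous and unassessed goes to the top of the list.
for system in inventory:
    if system.autonomous_actions and not system.risk_assessed:
        print(f"PRIORITY: {system.name} acts autonomously and has no completed risk assessment")
```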
The Bottom Line
I believe AI will transform business in ways we're only beginning to imagine. The productivity gains, the innovation potential, the ability to solve previously intractable problems—it's all real. But so are the risks.
96% of leaders believe that adopting generative AI makes a security breach more likely, yet only 24% of current generative AI projects are secured⁷.
That gap—between awareness and action—is where breaches happen, reputations die, and value evaporates.
We have a choice. We can continue the current approach—rapid adoption with minimal risk management, learning from failures, hoping the benefits outweigh the costs. Or we can do what we should have done with previous technology waves: build security, privacy, and trust into the foundation.
The organizations that get this right won't just avoid disasters. They'll build sustainable competitive advantages based on trusted AI that customers, regulators, and stakeholders can rely on. They'll move faster because they have guardrails. They'll innovate more because they understand their boundaries.
AI risk management isn't about fear. It's about enabling the future we want to build. And from where I sit, that future requires us to act now, before the gap between adoption and risk management becomes a chasm we can't cross.
The clock is ticking. What's your move?
References
1. SandboxAQ AI Security Benchmark Report (2025). "AI is here, security still isn't." Help Net Security, July 30, 2025. https://www.helpnetsecurity.com/2025/07/30/report-ai-security-readiness-gap/
2. PwC 2024 US Responsible AI Survey. August 15, 2024. https://www.pwc.com/us/en/tech-effect/ai-analytics/responsible-ai-survey.html
3. Microsoft Data Security Index Annual Report (2024). "Microsoft Data Security Index annual report highlights evolving generative AI security needs." Microsoft Security Blog, November 13, 2024.
4. Orca Security (2024). "2024 State of AI Security Report Reveals Top AI Risks Seen in the Wild." September 18, 2024.
5. KPMG US AI Risk Survey Report (2023). "Artificial Intelligence Survey 23." March 14, 2023.
6. McKinsey Global Survey on AI (2024). "The state of AI: How organizations are rewiring to capture value." March 12, 2025.
7. IBM Institute for Business Value (2024). "Risk Management in AI." IBM Think Insights.