Adrian Vidal
Thought leadership
April 14, 2026

CDO and CISO: The Relationship That Makes or Breaks Enterprise AI Rollouts

5 min read

74% of IT leaders cite security and compliance as the top barrier to AI adoption, and the CISO is often cast as the reason rollouts stall. But the data says something different: organizations with comprehensive AI governance adopt agentic AI at nearly four times the rate of those with developing policies. The problem isn't the CISO. It's the absence of structure: no one owns AI risk, approval processes weren't built for AI, and neither executive has full visibility into what's happening. This article explains what's actually slowing enterprise AI down, and what the CDO and CISO need to build together to fix it.


Enterprise AI rollout puts two roles in an uncomfortable position. The CDO is accountable for data strategy and, increasingly, for AI deployment velocity. The CISO is accountable for security posture and, increasingly, for everything that could go wrong when AI is involved. Boards are pushing hard in both directions at the same time. And neither role owns the territory in between: who governs AI data access, who approves which tools, who gets called when something breaks.

When that middle ground is undefined, the relationship suffers. The CDO sees a security team that slows everything down. The CISO sees a data team that moves without them. Both are right about what they're experiencing. Neither diagnosis explains the actual problem. According to a 2024 Cloudera survey of 600 IT leaders, 74% cite security and compliance as the top barrier to enterprise AI adoption. But the data on which organizations actually ship AI tells a different story, one where the CDO-CISO relationship isn't the obstacle. It's the strategy.

What breaks down when the relationship isn't working

Nobody owns AI governance, and both sides know it. The CDO owns data. The CISO owns security. Neither owns AI-as-a-business-risk, the decisions that require both. Only 28% of organizations have formally defined AI governance roles (IAPP, 2024). The rest are improvising at every decision point: who approves this vendor, who classifies this dataset for AI use, who owns the incident when an agent pulls data it shouldn't have. Research from Knostic describes how it plays out in practice: "AI governance often falls in-between organizations — IT thinks security owns it, security thinks legal owns it, and legal thinks IT owns it." The result isn't that decisions get made slowly. It's that they often don't get made at all.

Part of what makes this persistent is that both roles are being measured on things that pull in opposite directions. The CDO is accountable for deployment velocity. The CISO is accountable for what goes wrong. Neither is evaluated on how well the other succeeds. Gartner's framing of the CISO's AI mandate captures the bind precisely: the CISO is expected to "exert influence over broader AI governance while leading cybersecurity governance activities." Involved enough to be accountable, not empowered enough to resolve things unilaterally. Meanwhile, 70% of CDAOs say they have primary responsibility for AI strategy.

The CISO gets involved too late. AI tools get selected, piloted, and sometimes deployed before the CISO's team is in the room. Then they're handed a review request on a timeline their process was never built to meet. One CISO quoted in The Hacker News (April 2025) described the structural mismatch: "GRC teams get nervous when they hear 'AI' and use boilerplate question lists that slow everything down." The slowdown isn't security caution. It's a review framework designed for large software deployments being applied to an entirely different category of tool, under pressure, with no prior agreement on how the review should work. According to Accenture, 37% of organizations assess AI security before deploying to production. The other 63% aren't moving fast. They're moving without the CISO, which is a different problem entirely.

Shadow AI fills the gap. When the legitimate path is unclear or slow, people find another one. A 2025 WalkMe survey found 78% of AI users bring their own tools to work, with 52% reluctant to admit it. IBM's 2025 Cost of a Data Breach Report found shadow AI incidents add an average of $670,000 to breach costs. Shadow AI isn't a user behavior problem. It's what the CDO-CISO relationship looks like to the rest of the organization when it has no functional output: a gap where nobody's watching and nothing gets approved, so people stop asking.

What each side actually needs from the other

The relationship works when both roles understand what the other one is missing: not just what they're responsible for, but what they genuinely can't do without the other's help.

What the CDO needs from the CISO: a legible process, not just a rigorous one. The most common CDO frustration isn't security scrutiny itself. It's not knowing whether an initiative will clear in two weeks or six months, or what would need to change to get a faster answer. What resolves that is a CISO who has defined what approval looks like at different risk levels: a scoped copilot with limited data access goes on a fast track; an agentic system with broad access to sensitive records gets a full review. When those criteria exist upfront, the CDO can design toward them. When they don't, every initiative enters the same queue with no visible exit, which is exactly how the conditions for shadow AI get built.
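To make that concrete, here's a minimal sketch of what tiered, legible routing could look like. Everything in it is an illustrative assumption: the tier names, the thresholds, and the initiative fields are hypothetical, not a prescribed framework.

```python
from dataclasses import dataclass

@dataclass
class AIInitiative:
    name: str
    data_sensitivity: str  # assumed labels: "public" | "internal" | "restricted"
    is_agentic: bool       # acts without a human reviewing each step
    dataset_count: int     # breadth of data access requested

def review_track(initiative: AIInitiative) -> str:
    """Route an initiative to a review track defined before the request arrives."""
    # Sensitive or autonomous access always gets the full treatment.
    if initiative.data_sensitivity == "restricted" or initiative.is_agentic:
        return "full-review"  # e.g., threat model plus a runtime monitoring plan
    # A narrowly scoped assistant can take the fast track.
    if initiative.dataset_count <= 3:
        return "fast-track"
    return "standard-review"

copilot = AIInitiative("support-copilot", "internal", is_agentic=False, dataset_count=2)
agent = AIInitiative("contracts-agent", "restricted", is_agentic=True, dataset_count=12)
print(review_track(copilot))  # fast-track
print(review_track(agent))    # full-review
```

The specific thresholds matter less than the fact that the routing logic exists before any request arrives, so the CDO can design an initiative toward a known track.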

What the CISO needs from the CDO: visibility they currently don't have. 67% of CISOs report limited visibility into how AI is being used across their organization, with none reporting full visibility (2026 CISO AI Risk Report). The CISO who's being called a blocker often can't see what they're being asked to protect. They need data classification context: which datasets are sensitive, where PII lives in the pipeline, what an agent is likely to touch, so they can make access decisions based on real information rather than cautious assumptions. And they need a CDO who treats that information-sharing as a design input, not a compliance checkbox.
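One way to treat that information-sharing as a design input is a machine-readable classification manifest the CDO maintains and the CISO queries during review. The sketch below is hypothetical: the dataset names, labels, and helper function are illustrative, and a real manifest would live in a data catalog or governance tool rather than a Python dict.

```python
# Hypothetical manifest: dataset names and labels are illustrative.
DATASET_CLASSIFICATION = {
    "crm.contacts":        {"sensitivity": "restricted", "contains_pii": True},
    "billing.invoices":    {"sensitivity": "restricted", "contains_pii": True},
    "product.usage_stats": {"sensitivity": "internal",   "contains_pii": False},
    "docs.public_faq":     {"sensitivity": "public",     "contains_pii": False},
}

def datasets_needing_review(requested: list[str]) -> list[str]:
    """Return the requested datasets that need CISO sign-off before AI use."""
    flagged = []
    for name in requested:
        entry = DATASET_CLASSIFICATION.get(name)
        # Unclassified datasets are flagged too: no classification, no access.
        if entry is None or entry["sensitivity"] == "restricted" or entry["contains_pii"]:
            flagged.append(name)
    return flagged

print(datasets_needing_review(["product.usage_stats", "crm.contacts", "hr.reviews"]))
# -> ['crm.contacts', 'hr.reviews']
```

The design choice worth noting is the default: anything the manifest doesn't know about gets flagged, which turns missing classification into a visible gap instead of a silent pass.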

EY's 2025 C-Suite Cybersecurity Study found 54% of CISOs say internal guidelines on AI responsibility are unclear, while only 20% of CEOs say the same. The executives closest to the problem know the structure is missing.

The shared blind spot, and what closes it

Here's what makes this relationship genuinely difficult: both roles are operating with incomplete information, and neither can fix that alone.

The CDO knows which datasets are supposed to feed their models. They don't always know whether an AI agent, at inference time, accessed something it wasn't supposed to, whether that data was fresh when it was used, or whether a schema change upstream broke an assumption the model was built on. The CISO knows what their access controls say. They can't see what's happening at the data layer when AI actually runs.

That shared gap, where neither side has a real-time picture of what AI is doing with data, is where the relationship has to start. It's not a CDO problem or a CISO problem to fix. It's a joint infrastructure problem, and data lineage and real-time access monitoring are what close it for both roles at once. A December 2025 study by CSA and Google Cloud found that organizations with comprehensive AI governance adopt agentic AI at a 46% rate, compared to 12% for organizations with developing policies. Mature governance organizations are nearly four times more likely to have AI in production. Governance maturity and deployment speed aren't in tension. They move together.
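At its simplest, the real-time access monitoring piece could look like the sketch below: a hook at the data layer that records every dataset an agent touches and flags anything outside its approved scope, so the CDO and the CISO see the same event at the same time. The agent names, scope table, and logging details are illustrative assumptions, not a reference implementation.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-access-monitor")

# Hypothetical scope table: which datasets each agent was approved to touch.
APPROVED_SCOPE = {
    "contracts-agent": {"legal.contracts", "crm.contacts"},
}

def record_access(agent: str, dataset: str) -> bool:
    """Log every access as it happens; alert when it falls outside scope."""
    in_scope = dataset in APPROVED_SCOPE.get(agent, set())
    log.info("%s agent=%s dataset=%s in_scope=%s",
             datetime.now(timezone.utc).isoformat(), agent, dataset, in_scope)
    if not in_scope:
        log.warning("out-of-scope access: agent=%s dataset=%s", agent, dataset)
    return in_scope

record_access("contracts-agent", "legal.contracts")   # expected traffic
record_access("contracts-agent", "billing.invoices")  # fires the alert
```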

The Stanford Digital Economy Lab's analysis of 51 enterprise AI deployments names the mechanism: "When given a role in governance rather than simply told to approve, staff functions frequently shifted from blocking to actively supporting deployment." A role. Not a better conversation. A defined position with real accountability. When the CDO and CISO both have that, the relationship produces decisions instead of delays.

IANS Research (2026) puts the right frame on the CISO's job in this dynamic: "The security team's job isn't to prevent AI adoption — it's to create safe channels for it." That's true. But those channels don't exist until the CDO gives the CISO what they need to build them: visibility into what data AI is actually using, a shared understanding of where the boundaries are, and a seat at the table before the architecture is set.

Why agentic AI makes this relationship more urgent, not less

Most of the CDO-CISO friction described above predates agentic AI. It's been simmering since the first enterprise copilot rollout. But agents change the stakes in a specific way that makes resolving it a more pressing priority than it was two years ago.

A copilot has a human in the loop. A person asks a question, reads the output, and decides what to do with it. The blast radius of a bad data access decision is bounded by that review step. An agent doesn't wait. It receives a task, traverses whatever data sources it has access to, makes a sequence of decisions, and produces an output, often without a human seeing any intermediate step. By the time anyone reviews the result, the data access has already happened. The Stanford Enterprise AI Playbook describes what this looks like in practice: "Data left through a convenience tool that bypassed every approval process the organization had. The same week, the engineering team deployed a custom AI assistant with direct access to their CRM and contract database. No security review. No runtime monitoring."

That's not a security failure in the traditional sense. It's a relationship failure. The CDO didn't know what the engineering team had provisioned. The CISO didn't know it was running. Neither had a shared view of what the agents were doing. The governance relationship that would have caught it wasn't there: no early CISO involvement, no shared visibility infrastructure, no defined access scope. For copilots, an organization can often recover from that gap after the fact. For agents operating at machine speed across sensitive data, the window is much smaller.
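A defined access scope only shrinks that window if it's enforced when the agent acts, not reviewed afterward. The sketch below gates each fetch on an approved-scope table; as before, every name is a hypothetical stand-in, and production enforcement would sit in the data platform's access layer rather than in application code.

```python
from typing import Callable

# Hypothetical scope table, as in the monitoring sketch above.
APPROVED_SCOPE = {
    "contracts-agent": {"legal.contracts", "crm.contacts"},
}

class ScopeViolation(Exception):
    """An agent tried to read a dataset it was never approved for."""

def scoped_fetch(agent: str, dataset: str, fetch: Callable[[str], str]) -> str:
    """Run a fetch only if it falls inside the agent's approved scope."""
    if dataset not in APPROVED_SCOPE.get(agent, set()):
        # Denied at access time: there is no human review step to wait for.
        raise ScopeViolation(f"{agent} is not approved for {dataset}")
    return fetch(dataset)

fake_fetch = lambda name: f"rows from {name}"
print(scoped_fetch("contracts-agent", "crm.contacts", fake_fetch))
try:
    scoped_fetch("contracts-agent", "finance.payroll", fake_fetch)
except ScopeViolation as err:
    print(err)  # contracts-agent is not approved for finance.payroll
```

The scope table itself is the relationship's output: it only exists if the CDO and the CISO defined it together before the agent shipped.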


about the author

Adrian Vidal

Adrian Vidal is a writer and content strategist at Bigeye, where they explore how organizations navigate the practical challenges of scaling AI responsibly. With over 10 years of experience in communications, they focus on translating complex AI governance and data infrastructure challenges into actionable insights for data and AI leaders.

At Bigeye, their work centers on AI trust: examining how organizations build the governance frameworks, data quality foundations, and oversight mechanisms that enable reliable AI at enterprise scale.

Adrian's interest in data privacy and digital rights informs their perspective on building AI systems that organizations, and the people they serve, can actually trust.

