Adrianna Vidal
Thought leadership
March 10, 2026

Day Two Dispatch: Gartner Data & Analytics Summit 2026

8 min read


Our Day 1 recap covered the strategic frame: what the AI moment requires and how organizations need to be positioned for it. Day 2 got into the execution layer. Here's what we took from it.

1. You can't track value you haven't defined

Alan Duncan's session opened with a figure: 83% of CFOs report that less than half of their data, analytics, and AI investments have delivered financial results. Another 80% of leaders say they struggle to track value at all.

Gartner predicts that by 2027, 75% of CDAOs will face significant value realization challenges. Duncan's framing is useful here. Value isn't a number you arrive at after a project ships. It's a hypothesis you define at the start, test in increments, and track with leading indicators. The organizations with the clearest outcomes from AI are the ones that started with a specific value story: connected to organizational purpose, mapped to business outcomes, and owned by someone with both the authority and the incentive to measure it.

Duncan's Data-to-Value Framework is specific about what this means in practice: link the AI capability to a value driver (cost reduction, revenue growth, NPS improvement, churn reduction), then define both the lagging metric that will confirm success and the leading indicators (adoption rate, data quality scores, model accuracy) that will tell you whether you're on track before you can see the outcome.
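
To make that concrete, here's a minimal sketch of what a value hypothesis could look like as a tracked artifact. This is our illustration, not part of Duncan's framework; the field names, targets, and the assumption that higher is better for every indicator are ours:

```python
from dataclasses import dataclass

@dataclass
class ValueHypothesis:
    """One AI initiative's value story, defined before deployment."""
    initiative: str
    value_driver: str         # e.g. "cost reduction", "churn reduction"
    lagging_metric: str       # confirms the outcome after the fact
    leading_indicators: dict  # indicator name -> target, checked early
    owner: str                # who has the authority and incentive to measure

    def on_track(self, observed: dict) -> bool:
        # Assumes higher is better for every indicator; real targets
        # would need a direction per metric.
        return all(observed.get(name, 0.0) >= target
                   for name, target in self.leading_indicators.items())

# Hypothetical example: a support agent, tracked from day one
hypothesis = ValueHypothesis(
    initiative="tier-one support agent",
    value_driver="cost reduction",
    lagging_metric="cost per resolved ticket",
    leading_indicators={"adoption_rate": 0.6, "data_quality_score": 0.95},
    owner="VP Support",
)
print(hypothesis.on_track({"adoption_rate": 0.42, "data_quality_score": 0.97}))
# -> False: adoption is lagging long before the cost metric could show it
```

The point is structural: the leading indicators exist, with targets and an owner, before anything ships.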

One action: Define how you'll track value before the next AI deployment starts, not after. That means identifying leading indicators now.

2. Skills erosion is already happening

Arun Chandrasekaran's session on generative AI's invisible undercurrents made one argument above all others: the blind spots that will do the most damage aren't the dramatic ones, and the pace of AI deployment makes them inevitable.

One that's top of mind for CDAOs is skills erosion. As GenAI takes on more routine work, people naturally stop exercising the judgment that makes them effective on the non-routine work. It's not a failure of intent; it's what happens when the muscle isn't needed as often. Chandrasekaran's framing: "GenAI should amplify human genius, not erode it." His session's prediction: by 2030, 30% of enterprises will face measurable degradation in decision-making quality as a direct result of over-reliance on AI. And it won't surface until the judgment is needed and isn't there.

One action: Identify the workflows where GenAI has most replaced human judgment rather than augmenting it. Those are where skills erosion will advance the fastest, and where deliberate practice or human-in-the-loop design needs to be built back in before the capability is gone.
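
One way to get a first-pass read, assuming you already log AI-drafted outputs and whether a human changed them: treat the human-edit rate per workflow as a rough proxy for how much judgment is still being exercised. The event format and the 50% threshold below are illustrative assumptions, not a Gartner metric:

```python
from collections import defaultdict

# Hypothetical event log: (workflow, ai_drafted, human_edited)
events = [
    ("ticket_triage", True, False),
    ("ticket_triage", True, False),
    ("ticket_triage", True, True),
    ("contract_review", True, True),
    ("contract_review", True, True),
]

# Per workflow: share of AI-drafted outputs a human actually changed.
drafted = defaultdict(int)
edited = defaultdict(int)
for workflow, ai_drafted, human_edited in events:
    if ai_drafted:
        drafted[workflow] += 1
        edited[workflow] += human_edited

for workflow in drafted:
    edit_rate = edited[workflow] / drafted[workflow]
    # Arbitrary illustrative threshold: flag workflows where humans
    # rarely touch the AI's output, i.e. judgment is rarely exercised.
    flag = "review for skills erosion" if edit_rate < 0.5 else "ok"
    print(f"{workflow}: human-edit rate {edit_rate:.0%} -> {flag}")
```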

3. Politics is affecting adoption

MIT's GenAI Divide research found that 95% of GenAI pilots yield no measurable return. Frank Buytendijk cited that figure, and his session made the case that the statistic is more a symptom than the core issue.

AI changes who has information, who controls decisions, and who can act without asking. When that shifts, the response is rarely visible disagreement. Buytendijk describes it as silent subversion: projects get quietly undermined, timelines slip without stated reasons, tools get deployed and gradually stop being used. As he put it: "People don't fear AI as much as their position in the world."

That last point connects to something Chandrasekaran surfaced in his blind spots session: deployment doesn't equal adoption. Organizations treat a successful deployment as the finish line, but actual value requires people to use the tools, use them consistently, and use them well. The adoption gap (the distance between "we deployed this" and "people are actually using it") is what silent subversion looks like in aggregate. Low usage numbers don't automatically mean the tool is bad. They can mean the people who need to use it have reasons not to.

One action: Track adoption and deployment as separate metrics. If you're only measuring whether a tool shipped, you can't watch for underperformance, and you can't address what you're not measuring.
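
A minimal sketch of what tracking the two separately could look like, assuming per-tool usage events are available from product analytics or a warehouse; the tool names, numbers, and log format are hypothetical:

```python
# Deployment answers "did it ship?"; adoption answers "is it used, and still?"
# Hypothetical usage log: tool -> list of (user, week) active events.
usage = {
    "copilot_support": [("ana", 1), ("ana", 2), ("ben", 1)],
    "contract_summarizer": [("cho", 1)],  # shipped, then quietly abandoned
}
deployed_tools = ["copilot_support", "contract_summarizer"]
eligible_users = {"copilot_support": 40, "contract_summarizer": 25}
current_week = 2

for tool in deployed_tools:
    events = usage.get(tool, [])
    ever_used = {user for user, _ in events}
    active_now = {user for user, week in events if week == current_week}
    print(
        f"{tool}: deployed=yes, "
        f"adoption={len(ever_used) / eligible_users[tool]:.0%}, "
        f"weekly_active={len(active_now) / eligible_users[tool]:.0%}"
    )
# A tool can score 100% on deployment and near zero on weekly_active;
# that gap is the "silent subversion" signal in aggregate.
```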

4. Cost tolerance is quickly running out

Rita Sallam's session on the value and cost of AI agents laid out an actionable framework. Defend/Extend/Upend maps AI use cases along two axes (complexity and cost) and makes visible a mismatch that's responsible for many agentic AI project failures.

Defend deployments (task-specific improvements to maintain competitive parity: think coding assistants, Microsoft Copilot, basic automation) carry the lowest cost and complexity profile. Extend deployments embed AI into existing processes for differentiation (GenAI in customer service that recommends upsells, agents handling tier-one support). Upend deployments create new products, markets, or core processes (pharmaceutical drug discovery, full business process automation). The cost difference between Defend and Upend isn't incremental; it's an order of magnitude.

The failure pattern Sallam identifies: 70% of agentic AI use cases will fail to deliver expected value, and the primary cause is a cost model that was wrong at the outset. Organizations build business cases for Upend outcomes on Defend cost assumptions. By 2027, 60% of large enterprises will struggle with cost sprawl and surprise overruns from agentic AI. The inflection point is usually mid-implementation, when the real cost structure becomes visible and the ROI case collapses.

One action: Before the next agentic AI investment is approved, map it to Defend, Extend, or Upend explicitly, and validate the cost model against that category, not against a headcount substitution estimate.
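
A back-of-the-envelope version of that validation; the per-category cost ranges below are placeholders we invented for illustration, not Gartner's figures:

```python
# Illustrative annual run-cost ranges per category (placeholder numbers,
# not Gartner's). The point is the order-of-magnitude gap, not the values.
COST_RANGES = {  # category -> (low, high) annual cost, USD
    "defend": (50_000, 300_000),
    "extend": (300_000, 2_000_000),
    "upend": (2_000_000, 20_000_000),
}

def check_business_case(category: str, budgeted_annual_cost: float) -> str:
    """Flag business cases whose budget sits below the category's floor,
    i.e. Upend ambitions priced on Defend assumptions."""
    low, high = COST_RANGES[category]
    if budgeted_annual_cost < low:
        return f"under-budgeted for {category}: expect >= ${low:,.0f}/yr"
    if budgeted_annual_cost > high:
        return f"above the {category} range; recheck scope"
    return "budget consistent with category"

# An Upend-scale use case budgeted like a Defend one gets flagged before
# approval, instead of surfacing mid-implementation.
print(check_business_case("upend", 250_000))
```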

The threats growing in silence

Many organizations are bracing for AI to fail in the obvious ways: hallucinated outputs, compliance violations that surface in a news cycle. Those are real risks. They're also the ones with the most attention and understanding around them.

The risks with the longest tail aren't the ones that announce themselves. Skills erosion happens gradually in organizations that are, by any external measure, deploying AI successfully. Political resistance to adoption shows up as vague implementation delays, not as a stated objection. Cost model failures compound quietly through mid-year, then surface as a canceled roadmap.

Chandrasekaran called it clearly: the AI bullet train makes blind spots inevitable. The organizations that understand that aren't trying to eliminate all blind spots. They're building the structures (adoption metrics, cost model discipline, power-aware change management) that catch the silent failures before they become business-toppling ones.

Sessions referenced: "How to Realize Value From Data, Analytics and AI: The Journey" (Alan Duncan); "Generative AI's Invisible Undercurrents: 10 Blind Spots CDAOs Aren't Watching But Should" (Arun Chandrasekaran); "Mastering the Power Politics of Artificial Intelligence" (Frank Buytendijk); "How to Calculate the Value and Cost of AI Agents" (Rita Sallam). All sessions from Gartner Data & Analytics Summit 2026.

about the author

Adrianna Vidal

Adrianna Vidal is a writer and content strategist at Bigeye, where she explores how organizations navigate the practical challenges of scaling AI responsibly. With over 10 years of experience in communications, she focuses on translating complex AI governance and data infrastructure challenges into actionable insights for data and AI leaders.

At Bigeye, her work centers on AI trust: examining how organizations build the governance frameworks, data quality foundations, and oversight mechanisms that enable reliable AI at enterprise scale.

Her interest in data privacy and digital rights informs her perspective on building AI systems that organizations, and the people they serve, can actually trust.



Want the practical playbook?

Join us on April 16 for The AI Trust Summit, a one-day virtual summit focused on the production blockers that keep enterprise AI from scaling: reliability, permissions, auditability, data readiness, and governance.
