Adrianna Vidal
Thought leadership
March 9, 2026

Day One Dispatch: Gartner Data & Analytics Summit 2026

8 min read


There's no shortage of sessions at Gartner Data and Analytics Summit 2026. We zeroed in on the ones we thought mattered most for data leaders right now. Here's what stood out from Day 1.

1. Caution isn't safety, it's a strategy failure

The most consequential argument at this year's Summit came from Rita Sallam in "Signature Series: Get Ready for Data and Analytics 2030." She frames three archetypes of AI ambition (AI-cautious, AI-opportunistic, and AI-first) and makes a case that most organizations aren't prepared for: by 2030, being AI-cautious will be the highest-risk position in the market, not the safest.

AI-first organizations spend up to 4x more of their revenue on data, governance, and people foundations. That investment compounds. The performance gap between AI-cautious and AI-first organizations won't appear all at once; it widens incrementally until, by 2030, it becomes structural and very difficult to close. Sallam's point isn't that every organization needs to move recklessly. It's that treating caution as a responsible strategy is itself a bet, and moving toward 2030, it will be a losing one.

Arun Chandrasekaran's session, "Advance Your AI Strategy for Success in an Era of Constant Change," puts numbers on the maturity picture: only 17% of organizations have high AI maturity today, while 30% sit in the low-maturity bucket.

The opening keynote, "Navigate AI on Your Data and Analytics Journey to Value" by Adam Ronthal and Georgia O'Callaghan, frames it in the starkest terms: "If you don't lead AI, AI will lead you."

One action: Map your organization honestly against the three archetypes. If you're AI-cautious by deliberate design, based on a real assessment of your constraints, that's defensible for now. But you'll need a plan for making the shift, because the competitive gap only widens from here.

2. Context is the new competitive moat

As foundation models become cheaper and more accessible, a question gets more urgent: if everyone has access to roughly the same models, what differentiates your AI outcomes? Sallam's answer is context. Specifically, context as critical infrastructure.

The logic follows from how AI models actually work. A model without context is a general-purpose tool. A model with deep, accurate, well-structured organizational context is something closer to a specialized expert. Your business knowledge, your data, your domain-specific metadata, none of that is available off the shelf. It can't be purchased from a model provider. It's the one layer of the AI stack that's genuinely proprietary to you. The model is table stakes. The context is the moat.

Mark Beyer's session, "Using Active Metadata to Support Data Agents for AI," makes this concrete at the technical level. His central insight is both obvious and underappreciated in practice: agents only know what's documented and present. They have no real-world context, only data. If the metadata isn't there, the agent can't reason about it. If it's stale or passive, the agent can't adapt to it.

One action: Audit the context layer in your data infrastructure. Not just what you're cataloging, but what's actively maintained, event-driven, and agent-readable. If your metadata strategy was designed for human analysts, you'll need to start making changes to support AI agents as data consumers, too.
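
To make "agent-readable" concrete, here's a minimal sketch in Python of the difference between metadata written for a human reader and metadata an agent can actually act on. The field names, thresholds, and the check itself are illustrative assumptions, not definitions from Beyer's session or any particular catalog's API.

```python
# Illustrative sketch only: fields and thresholds are assumptions, not a
# definition from the session or from any specific metadata product.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class ActiveMetadata:
    """A dataset's metadata as an agent needs to consume it: machine-readable,
    timestamped, and updated by pipeline events rather than by a human
    remembering to edit a wiki page."""
    dataset: str
    owner: str
    description: str                  # what a human analyst usually gets
    last_loaded_at: datetime          # emitted by the pipeline on each run
    expected_refresh: timedelta       # the freshness contract
    schema_version: str               # bumped on every schema change event
    allowed_uses: list[str] = field(default_factory=list)

def agent_can_use(meta: ActiveMetadata, purpose: str, now: datetime) -> tuple[bool, str]:
    """The kind of check an agent can only make if the metadata exists, is
    current, and is structured for machines rather than for people."""
    if now - meta.last_loaded_at > meta.expected_refresh:
        return False, f"{meta.dataset} is stale ({now - meta.last_loaded_at} since last load)"
    if purpose not in meta.allowed_uses:
        return False, f"{meta.dataset} is not approved for '{purpose}'"
    return True, "ok"

# Example: an agent deciding whether to answer a forecasting question from this table.
meta = ActiveMetadata(
    dataset="finance.daily_revenue",
    owner="data-eng",
    description="Daily revenue rollup by region",
    last_loaded_at=datetime(2026, 3, 8, 6, 0, tzinfo=timezone.utc),
    expected_refresh=timedelta(hours=24),
    schema_version="v7",
    allowed_uses=["reporting", "forecasting"],
)
print(agent_can_use(meta, "forecasting", now=datetime.now(timezone.utc)))
```

The specific fields matter less than where they come from: every one of them has to be populated by pipeline events, not by periodic human documentation, or the agent is reasoning from a snapshot that no longer describes reality.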

3. Agentic AI is about to rewrite how work gets organized

Here's the number from Sallam's session that deserves more attention: in 2025, 81% of IT work was done by humans without AI involvement. Gartner's prediction is that by 2030, that number approaches 0%. Every significant piece of IT work will involve AI.

That's a fundamental redesign of how human and machine labor get allocated. And it raises an organizational question: if the composition of work is changing this dramatically, is the structure of your organization built to navigate that change?

Mike Rollings' session, "How to Design the AI Organization," is the most direct address of this question. "AI won't transform your enterprise unless your AI organization is structurally built to scale it," he said. The warning signs he identifies are worth knowing because they're common: an AI leader without enterprise-wide scope, an AI council that convenes without generating value, and AI use cases that are predominantly productivity improvements rather than business model changes. These aren't signs of bad leadership. They're signs of a structural design that was built for AI experimentation, not AI transformation.

What the data shows about what actually works: 96% of high-maturity organizations have a dedicated AI leader, with 73% of dedicated AI leaders in high-maturity organizations reporting directly to the CEO. And the role isn't about owning the AI team, it's about building AI capability across the entire enterprise. Rollings describes the AI leader as a capability builder, not an empire builder.

Sallam's human-agent collaboration shift connects directly here. By 2030, Gartner sees 25% of work done entirely by AI and 75% done by humans working with AI. Designing that operating model, figuring out which decisions require human judgment, which can be delegated to agents, and how the interface between them gets governed, is the next challenge leaders will face.
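
One way to start is to write the delegation rules down as an explicit policy rather than deciding case by case. A rough sketch follows; the dimensions (reversibility, customer impact, confidence) and thresholds are illustrative assumptions, not anything Gartner prescribed.

```python
# Sketch of encoding "which decisions go to agents, which stay with humans"
# as an explicit, auditable policy. Dimensions and cutoffs are assumptions.
from dataclasses import dataclass
from enum import Enum

class Route(Enum):
    AGENT = "agent handles end to end"
    AGENT_WITH_REVIEW = "agent drafts, human approves"
    HUMAN = "human decides, agent assists"

@dataclass
class Decision:
    name: str
    reversible: bool          # can the action be cheaply undone?
    customer_facing: bool     # does a mistake reach customers directly?
    model_confidence: float   # 0.0 to 1.0, ideally calibrated

def route(d: Decision) -> Route:
    """Delegate only what is low-stakes and reversible; keep humans on
    anything customer-facing or low-confidence."""
    if not d.reversible or d.customer_facing:
        return Route.HUMAN if d.model_confidence < 0.8 else Route.AGENT_WITH_REVIEW
    return Route.AGENT if d.model_confidence >= 0.8 else Route.AGENT_WITH_REVIEW

print(route(Decision("backfill internal dashboard", True, False, 0.92)))   # Route.AGENT
print(route(Decision("adjust customer credit limit", False, True, 0.95)))  # Route.AGENT_WITH_REVIEW
```

Whatever the real dimensions turn out to be for your business, making them explicit turns the 25/75 split into something you can govern and audit rather than something that emerges by accident.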

One action: Assess whether your organization has a dedicated AI leader, and whether that leader has genuine enterprise-wide scope. Few distinctions matter more to whether AI can scale successfully in your enterprise.

4. The governance question has changed

Andrew White and Lauren Kornutick structured their session, "Data and Analytics Governance vs. AI Governance," as a deliberate debate: does D&A governance absorb AI governance, or does AI governance become its own distinct discipline that directs D&A governance? The resolution they land on is more nuanced than either position suggests.

Traditional data governance asked "is this data fit for purpose?", a largely technical question about quality, lineage, and access. AI governance asks "should we be making this decision with AI at all, and under what constraints?" That's a question about judgment, appropriateness, and responsibility. The organizational muscles required to answer it are different, the accountability structures are different, and the stakeholders are different.

Most existing governance programs were built for the first question. Many organizations think they're doing AI governance because they have data quality checks and a catalog, but what they're really doing is data governance, which is necessary but not sufficient. The good news is that the path forward doesn't require starting from scratch.

One action: Test whether your governance program can answer this question clearly: "Should we be using AI for this decision, and under what constraints?" If the honest answer is "we'd have to figure that out," that's worth investigating.
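
One way to run that test is to write the answer down as a structured record and see which fields you can't fill in. A minimal sketch, with fields that are illustrative assumptions rather than the framework from White and Kornutick's session:

```python
# Illustrative only: a minimal "can we answer the AI governance question?" check.
# Required fields are assumptions for the sketch, not a Gartner framework.
from dataclasses import dataclass, field

@dataclass
class AIUseCaseReview:
    decision: str                                # the business decision AI would make or inform
    accountable_owner: str | None = None         # a named person, not a team
    human_in_the_loop: bool | None = None        # is a human required to approve outputs?
    prohibited_inputs: list[str] = field(default_factory=list)  # e.g. protected attributes
    escalation_path: str | None = None           # where contested outcomes go

    def gaps(self) -> list[str]:
        """Return the unknowns that would force a 'we'd have to figure that out'."""
        missing = []
        if not self.accountable_owner:
            missing.append("no accountable owner named")
        if self.human_in_the_loop is None:
            missing.append("human-in-the-loop requirement undecided")
        if self.escalation_path is None:
            missing.append("no escalation path for contested outcomes")
        return missing

review = AIUseCaseReview(decision="auto-approve small-business loan applications")
print(review.gaps())
# ['no accountable owner named', 'human-in-the-loop requirement undecided',
#  'no escalation path for contested outcomes']
```

An empty gaps list for your highest-stakes use cases is what "we can answer the question clearly" looks like in practice.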

Designing for adaptation

Most organizations are treating AI as the variable and their own structure as the fixed point. AI is something to route around, adapt to, and ultimately manage. These sessions collectively make the opposite case: AI is the constant pressure. Your organization is what needs to move.

That's a different kind of challenge than most change management frameworks are built for. Traditional change management assumes you're moving from a known current state to a defined future state. AI doesn't offer one. It keeps morphing, which means the organizations that come out ahead won't be the ones that found the right answer once. They'll be the ones built to keep finding it.

Chandrasekaran calls this "resilient agility" in the context of AI strategy. A Mars rover isn't built for a known road. It's built for terrain that changes under it. Each wheel operates independently at whatever speed and traction the ground requires. Direction adjusts in real time based on what the sensors pick up. And it can keep moving without waiting for instructions from central command.

That's the organizational model these sessions were pointing toward. Not a fixed structure, but a flexible design that can respond to the shifting ground beneath us.

Sessions referenced: "Navigate AI on Your Data and Analytics Journey to Value" (Adam Ronthal and Georgia O'Callaghan); "Signature Series: Get Ready for Data and Analytics 2030" (Rita Sallam); "Advance Your AI Strategy for Success in an Era of Constant Change" (Arun Chandrasekaran); "How to Design the AI Organization" (Mike Rollings); "Data and Analytics Governance vs. AI Governance" (Andrew White and Lauren Kornutick); "Using Active Metadata to Support Data Agents for AI" (Mark Beyer). All sessions from Gartner Data and Analytics Summit 2026.

about the author

Adrianna Vidal

Adrianna Vidal is a writer and content strategist at Bigeye, where she explores how organizations navigate the practical challenges of scaling AI responsibly. With over 10 years of experience in communications, she focuses on translating complex AI governance and data infrastructure challenges into actionable insights for data and AI leaders.

At Bigeye, her work centers on AI trust: examining how organizations build the governance frameworks, data quality foundations, and oversight mechanisms that enable reliable AI at enterprise scale.

Her interest in data privacy and digital rights informs her perspective on building AI systems that organizations, and the people they serve, can actually trust.



Want the practical playbook?

Join us on April 16 for The AI Trust Summit, a one-day virtual summit focused on the production blockers that keep enterprise AI from scaling: reliability, permissions, auditability, data readiness, and governance.
