Adrian Vidal
March 12, 2026

Day Three Dispatch: Gartner Data & Analytics Summit 2026

7 min read

The conversations about where to invest in AI, how fast to move, and what the roadmap should look like are important, of course.

But the most interesting takeaway wasn’t a specific tool or framework. It was something more basic: what kind of organization actually survives a period of this much change?

Three ideas from Day Three sessions made that clear.

1. The S-curve waits for no one

Peter Hinssen described technological change through the lens of the S-curve: slow adoption at the beginning, rapid acceleration once the technology proves itself, and eventually a new normal.

We’ve seen this before. The shift from analog to digital followed exactly that pattern. Early on, companies talked about “digital strategy” as if it were a separate initiative. Eventually it stopped being a category. It was a given.

AI is moving along the same curve, and we’re entering the steep part.
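As a rough mental model (not something Hinssen presented), the S-curve is often described with a logistic function: adoption is nearly flat early on, accelerates sharply around an inflection point, then levels off at a new normal. A minimal sketch, with illustrative steepness and midpoint values:

```python
import math

def adoption(t, k=1.0, t_mid=0.0):
    """Logistic S-curve: slow start, steep middle, saturation.

    k sets the steepness and t_mid the inflection point; both are
    illustrative, not fitted to any real adoption data."""
    return 1.0 / (1.0 + math.exp(-k * (t - t_mid)))

# Well before the inflection point, adoption barely registers; at the
# midpoint it sits at 50% and is climbing fastest; well after, it has
# settled near 100%.
early, middle, late = adoption(-6), adoption(0), adoption(6)
```

The "steep part" of the curve is simply the region around `t_mid`, where each unit of time brings the largest jump in adoption, which is why late movers feel the compression so acutely.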

Most enterprises have good reasons to move carefully. AI systems can affect core operations, customer trust, and regulatory compliance. Caution makes sense.

But there’s a difference between moving carefully and waiting too long.

A useful comparison is how organizations approached IT infrastructure during the early days of digital transformation. Companies that invested early didn’t just gain a temporary advantage. They built internal expertise and systems that made it easier to keep adapting as technology evolved.

The organizations that waited had a harder time. Every new shift required urgent investment just to catch up, leaving little room to think ahead.

The real risk wasn’t experimentation. It was waiting.

Hinssen put it bluntly: "If you wait until you need it, you're dead."

2. Planning for the worst-case scenario

Another idea that came up repeatedly is how much uncertainty organizations are actually planning for.

Frank Buytendijk opened his session with a worrying statistic: 62% of experts expect the world to be worse off in ten years than it is today.

That doesn’t mean those predictions will come true. But it does highlight how volatile the next decade could be.

Buytendijk walked through several scenarios that could shape the future of AI, everything from global backlash against AI-driven job disruption to governments restricting certain technologies.

Buytendijk calls the underlying trap the “tyranny of pragmatism”: organizations optimize for what they understand today. That works fine in stable environments, but when the environment changes quickly, those strategies tend to break.

The organizations that handle uncertainty better are the ones that plan for multiple possibilities, including resistance, regulation, and social backlash, and that avoid building strategies that only work if everything goes according to plan.

3. Learning is velocity

The third shift discussed at the summit had less to do with technology and more to do with people.

The World Economic Forum estimates that 39% of workers’ core skills will change by 2030.

Technical skills are changing faster than traditional training cycles can keep up with. The half-life of many technical skills has dropped from roughly eight to twelve years down to two to five.
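Those half-life figures can be made concrete with simple exponential-decay arithmetic. A small illustrative sketch (the function name and scenario are mine, not from the summit):

```python
def skill_relevance(years, half_life):
    """Fraction of a skill's relevance remaining after `years`,
    assuming exponential decay with the given half-life.

    A purely illustrative model: halving the half-life means
    relevance erodes much faster over the same period."""
    return 0.5 ** (years / half_life)

# Under a ten-year half-life, roughly 71% of a skill remains relevant
# after five years; under a three-year half-life, only about 31% does.
old_regime = skill_relevance(5, 10)
new_regime = skill_relevance(5, 3)
```

The point of the comparison is that the same five-year gap between training cycles, harmless under the old regime, now erodes most of a skill's value.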

As AI takes over more of the technical execution, the human advantage shifts toward direction and judgment: asking the right questions, evaluating outputs critically, and helping teams actually adopt new tools.

In other words, the most valuable employees won’t necessarily be the ones with the deepest existing expertise.

They’ll be the ones who can pick up new skills quickly and apply them in ever-shifting roles.

Adaptability as an operating model

Organizations that treat AI adoption as a one-time transformation project may struggle to keep up as the technology continues to evolve. The more effective approach is to design organizations that expect change and can respond to it quickly.

That means investing early enough to build experience, planning for a wider range of possible futures, and developing teams that prioritize learning over static expertise.

Capability is what gets you to the table, but adaptability is what allows organizations to stay there.

Sessions referenced: Guest keynote (Peter Hinssen, NexxWorks); "Gartner Futures Lab: Gartner's Most Maverick D&A and AI Predictions" (Frank Buytendijk); Signature Series: Predicts 2026 (Rita Sallam). All sessions from Gartner Data and Analytics Summit 2026.

about the author

Adrian Vidal

Adrian Vidal is a writer and content strategist at Bigeye, where they explore how organizations navigate the practical challenges of scaling AI responsibly. With over 10 years of experience in communications, they focus on translating complex AI governance and data infrastructure challenges into actionable insights for data and AI leaders.

At Bigeye, their work centers on AI trust: examining how organizations build the governance frameworks, data quality foundations, and oversight mechanisms that enable reliable AI at enterprise scale.

Adrian's interest in data privacy and digital rights informs their perspective on building AI systems that organizations, and the people they serve, can actually trust.


Get the Best of Data Leadership

Subscribe to the Data Leaders Digest for exclusive content on data reliability, observability, and leadership from top industry experts.

Want the practical playbook?

Join us on April 16 for The AI Trust Summit, a one-day virtual summit focused on the production blockers that keep enterprise AI from scaling: reliability, permissions, auditability, data readiness, and governance.


Join the Bigeye Newsletter

1x per month. Get the latest in data observability right in your inbox.