Thought leadership · February 17, 2026

Webinar Recap: Insights from 400+ Enterprise AI Initiatives

5 min read


Bigeye launched its new AI Guardian webinar series with a research-backed discussion of what’s actually happening inside enterprise AI initiatives in 2026.

Hosted by Eleanor Treharne-Jones (CEO of Bigeye), the session featured guest speakers Danielle Crop (EVP at WNS; former CDO at AmEx and Albertsons) and Sean Rogers (CEO of BARC US), who helped unpack findings from a major research study conducted by Bigeye and BARC.

The survey included 421 global enterprise respondents, with a focus on organizations that have already moved beyond AI experimentation into real project deployment.

Below are the five biggest takeaways from the session.

1. Data quality has become the #1 enterprise AI blocker

One of the most striking findings from the research is how dramatically the data quality challenge has surged as companies moved from planning to execution.

Last year, only 19% of respondents saw data quality as a major concern. This year, it jumped to 44%, making it the top obstacle to AI success.

Eleanor highlighted how revealing this shift was:

“It went from like the bottom three to being the top concern… a powerful kind of demonstration of how priorities change from when you are thinking about something to when you are actually trying to execute.”

Sean reinforced the reality enterprises are hitting:

“Reality set in. When they started to deploy projects… they ran into a colossal roadblock around how important the quality of their data is.”

The message was clear: if organizations continue ignoring data quality issues, scaling AI will only get harder.

“If you are ignoring your data quality issues… I urge you to pump your brakes and reconsider because you're about to hit a significant roadblock.”

2. AI leaders are pulling away from the pack

A core theme of the webinar was the difference between AI “leaders” and everyone else.

Sean explained that roughly 20% of respondents were executing AI extremely well, based on performance across seven foundational categories:

  • clear AI leadership and ownership
  • standards and policies
  • governance and oversight
  • enterprise architecture readiness
  • legal/security/compliance alignment
  • data policies and access

He summarized it bluntly:

“A leader is a company who has fully deployed all seven of those areas and built a foundation for AI success.”

And he warned against the common enterprise mistake:

“We all get distracted… We scramble and do things quickly, and we likely fail quickly if we don't build a foundation.”

This takeaway framed much of the discussion: successful AI isn’t about models, but about operational maturity.

3. Costs are rising faster than expected

The webinar also surfaced a hard truth: many organizations underestimated the cost of AI production.

Eleanor noted that cost is becoming a major issue because leaders are often pressured to treat AI as an efficiency initiative with limited upfront investment.

“Sometimes there's actually no budget attached because this is going to drive efficiency and save money.”

Danielle shared a particularly sharp insight: pilots can be misleading because they don’t reflect real compute scale.

“Pilots went really well, but when we put them into production, the compute cost was actually higher than the original people cost.”

She also pointed out why pilots fail as forecasting tools:

“If you're curating everything… you can't estimate what your compute costs are actually going to be.”

The clear recommendation: AI initiatives need cost realism early, not after rollout.
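As a back-of-the-envelope illustration of why pilot numbers mislead (all figures and the curation factor below are hypothetical, not from the research), the gap between pilot and production compute spend can be sketched as:

```python
# Hypothetical back-of-envelope estimate: pilot compute cost rarely
# scales linearly to production. All numbers are illustrative.

def production_compute_estimate(
    pilot_monthly_cost: float,    # observed compute spend during the pilot
    pilot_requests: int,          # requests served during the pilot
    prod_requests: int,           # expected production request volume
    curation_factor: float = 3.0  # pilots run on curated inputs; raw
                                  # production traffic is often longer and
                                  # messier, inflating per-request cost
) -> float:
    per_request = pilot_monthly_cost / pilot_requests
    return per_request * prod_requests * curation_factor

# A $5,000/month pilot serving 10,000 requests, scaled to 1M requests:
estimate = production_compute_estimate(5_000, 10_000, 1_000_000)
print(f"${estimate:,.0f}/month")  # $1,500,000/month
```

The curation factor is the piece Danielle’s point hinges on: a pilot fed hand-picked inputs gives you a per-request cost that raw production traffic will not honor.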

4. Agents introduce new risks at speed and scale

While generative AI adoption is accelerating, the panel emphasized that agentic AI changes the risk profile dramatically.

Sean explained that ungoverned agents can easily drift:

“When you are running agents in an unrestrained or ungoverned environment… you are asking for problems.”

He described a growing issue across enterprises:

“Agent execution drift is a thing… The data underneath the agent sometimes will help them drift and go off and do interesting things.”

Danielle went further, explaining how hallucinations can multiply across systems:

“One agent hallucinates a little… it feeds another agent… then the hallucination gets bigger… Think of it as a telephone game on steroids.”
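Danielle’s “telephone game” intuition can be sketched as compounding error across a chain of agents (a toy model, not from the webinar; the error rate is hypothetical):

```python
# Toy model of hallucination compounding across chained agents.
# If each agent in a chain introduces a small independent error rate,
# the chance the final output is untainted decays multiplicatively.

def untainted_probability(per_agent_error: float, chain_length: int) -> float:
    return (1 - per_agent_error) ** chain_length

# A 2% per-agent error rate looks harmless in isolation...
print(round(untainted_probability(0.02, 1), 3))   # 0.98
# ...but across a 20-agent chain, roughly a third of outputs carry an error.
print(round(untainted_probability(0.02, 20), 3))  # 0.668
```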

Sean shared a striking anecdote:

“They think they have 20,000 agents that are outside of the restrictions of their governance models.”

The takeaway: agent adoption is exciting, but risk controls and observability are no longer optional.

5. Trust means detecting issues before your users do

The discussion repeatedly returned to one core concept: trust is what determines whether AI can scale.

Danielle offered a simple but powerful benchmark:

“Are you detecting issues before users do? … If you are not, then you have a lot of work to do.”

She emphasized that enterprises need:

“Productivity, transparency, reproducibility controls across the entire life cycle.”

Sean echoed the idea that trust isn’t a single feature — it’s the result of building the right ecosystem:

“If you're not building the foundational things, you are going to find yourself having problems.”

He also called out why many early chatbot deployments fall short:

“You all built chatbots last year… how many of you are going, ‘ours is not very good’?”

And explained why:

“You can't just go off and build a chatbot on a public model and cross your fingers and hope it works really well. It's not gonna.”

Instead, he pointed to a shift toward smaller, domain-specific models:

“Those are getting popular because they have lower cost… and you can refine them easier… That allows you to lower the number of hallucinations and raise that trust.”

Final thoughts

Both speakers agreed that enterprises are entering what Danielle called the “messy middle”, where foundational work becomes unavoidable.

“2026 is what I am calling the messy middle of AI… foundational infrastructure becomes even more important.”

Sean added that the debate around data quality is about to end:

“Bad data gives you bad outputs… we're sort of back to basics.”

And as the space accelerates, the pace of change itself is becoming a challenge:

“I've never had to work this fast and this hard to keep pace with the innovation in technology.”

Want the Full Research?

Download the full research report here, and watch the webinar replay here.



Join the Bigeye Newsletter

1x per month. Get the latest in data observability right in your inbox.