Bigeye officially launched its new AI Guardian webinar series with a timely and research-backed discussion on what’s actually happening inside enterprise AI initiatives right now in 2026.
Hosted by Eleanor Treharne Jones (CEO of Bigeye), the session featured guest speakers Danielle Crop (EVP at WNS, former CDO at AmEx and Albertsons) and Sean Rogers (CEO of BARC US), who helped unpack findings from a major research study conducted by Bigeye and BARC.
The survey included 421 global enterprise respondents, with a focus on organizations that have already moved beyond AI experimentation into real project deployment.
Below are the five biggest takeaways from the session.
1. Data quality has become the #1 enterprise AI blocker
One of the most striking findings from the research is how dramatically the data quality challenge has surged as companies moved from planning to execution.
Last year, only 19% of respondents saw data quality as a major concern. This year, it jumped to 44%, making it the top obstacle to AI success.
Eleanor highlighted how revealing this shift was:
“It went from like the bottom three to being the top concern… a powerful kind of demonstration of how priorities change from when you are thinking about something to when you are actually trying to execute.”
Sean reinforced the reality enterprises are hitting:
“Reality set in. When they started to deploy projects… they ran into a colossal roadblock around how important the quality of their data is.”
The message was clear: if organizations continue ignoring data quality issues, scaling AI will only get harder.
“If you are ignoring your data quality issues… I urge you to pump your brakes and reconsider because you're about to hit a significant roadblock.”
2. AI leaders are pulling away from the pack
A core theme of the webinar was the difference between AI “leaders” and everyone else.
Sean explained that roughly 20% of respondents were executing AI extremely well, based on performance across seven foundational categories:
- clear AI leadership and ownership
- standards and policies
- governance and oversight
- enterprise architecture readiness
- legal/security/compliance alignment
- data policies and access
He summarized it bluntly:
“A leader is a company who has fully deployed all seven of those areas and built a foundation for AI success.”
And he warned against the common enterprise mistake:
“We all get distracted… We scramble and do things quickly, and we likely fail quickly if we don't build a foundation.”
This takeaway framed much of the discussion: successful AI isn’t about models; it’s about operational maturity.
3. Costs are rising faster than expected
The webinar also surfaced a hard truth: many organizations underestimated the cost of AI production.
Eleanor noted that cost is becoming a major issue because leaders are often pressured to treat AI as an efficiency initiative with limited upfront investment.
“Sometimes there's actually no budget attached because this is going to drive efficiency and save money.”
Danielle shared a particularly sharp insight: pilots can be misleading because they don’t reflect real compute scale.
“Pilots went really well, but when we put them into production, the compute cost was actually higher than the original people cost.”
She also pointed out why pilots fail as forecasting tools:
“If you're curating everything… you can't estimate what your compute costs are actually going to be.”
The clear recommendation: AI initiatives need cost realism early, not after rollout.
4. Agents introduce new risks at speed and scale
While generative AI adoption is accelerating, the panel emphasized that agentic AI changes the risk profile dramatically.
Sean explained that ungoverned agents can easily drift:
“When you are running agents in an unrestrained or ungoverned environment… you are asking for problems.”
He described a growing issue across enterprises:
“Agent execution drift is a thing… The data underneath the agent sometimes will help them drift and go off and do interesting things.”
Danielle went further, explaining how hallucinations can multiply across systems:
“One agent hallucinates a little… it feeds another agent… then the hallucination gets bigger… Think of it as a telephone game on steroids.”
Sean shared a striking anecdote:
“They think they have 20,000 agents that are outside of the restrictions of their governance models.”
The takeaway: agent adoption is exciting, but risk controls and observability are no longer optional.
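The guardrail idea the panel describes can be made concrete with a small sketch: validate one agent’s output before handing it to the next, so a hallucinated value is caught at the handoff instead of compounding down the chain. All names here (`validate_handoff`, `REQUIRED_FIELDS`, the field names) are illustrative assumptions, not anything from the webinar or a specific product.

```python
# Minimal sketch: gate one agent's output before it reaches the next agent,
# so a single hallucinated field can't snowball through the chain.
# Field names and allowed values are hypothetical for illustration.

REQUIRED_FIELDS = {"customer_id", "amount", "currency"}
ALLOWED_CURRENCIES = {"USD", "EUR", "GBP"}

def validate_handoff(payload: dict) -> list[str]:
    """Return a list of problems; an empty list means the payload may proceed."""
    problems = []
    missing = REQUIRED_FIELDS - payload.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    if "amount" in payload and not isinstance(payload["amount"], (int, float)):
        problems.append("amount is not numeric")
    if payload.get("currency") not in ALLOWED_CURRENCIES:
        problems.append(f"unexpected currency: {payload.get('currency')!r}")
    return problems

# An upstream agent "hallucinated" two values; the gate stops the payload
# here instead of letting the next agent build on it.
suspect = {"customer_id": "c-42", "amount": "twelve", "currency": "ZZZ"}
print(validate_handoff(suspect))
```

A real deployment would log these failures to an observability system rather than just blocking, but even this kind of cheap schema check breaks the “telephone game” by forcing each hop to re-verify its inputs.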
5. Trust means detecting issues before your users do
The discussion repeatedly returned to one core concept: trust is what determines whether AI can scale.
Danielle offered a simple but powerful benchmark:
“Are you detecting issues before users do? … If you are not, then you have a lot of work to do.”
She emphasized that enterprises need:
“Productivity, transparency, reproducibility controls across the entire life cycle.”
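Danielle’s benchmark can be sketched in a few lines: run basic freshness and completeness checks on a data snapshot and raise alerts before the data reaches users. The thresholds, column name, and function names below are illustrative assumptions, not a description of any particular tool.

```python
# Minimal sketch of the "detect issues before users do" benchmark:
# check a table snapshot for staleness and nulls, and alert before serving.
# Thresholds and the 'email' column are hypothetical for illustration.

from datetime import datetime, timedelta, timezone

def run_checks(rows: list[dict], updated_at: datetime,
               max_age: timedelta = timedelta(hours=6),
               max_null_rate: float = 0.01) -> list[str]:
    """Return alert messages; an empty list means the data looks healthy."""
    alerts = []
    # Freshness: stale inputs are a common silent failure behind AI features.
    if datetime.now(timezone.utc) - updated_at > max_age:
        alerts.append("stale: last update is older than max_age")
    # Completeness: null rate on a key column.
    nulls = sum(1 for r in rows if r.get("email") is None)
    if rows and nulls / len(rows) > max_null_rate:
        alerts.append(f"null rate {nulls / len(rows):.0%} on 'email' exceeds threshold")
    return alerts

rows = [{"email": "a@x.com"}, {"email": None},
        {"email": None}, {"email": "b@x.com"}]
print(run_checks(rows, updated_at=datetime.now(timezone.utc)))
```

The point is where the check runs: in the pipeline, before users see the output, so the team hears about the 50% null rate from an alert rather than from a complaint.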
Sean echoed the idea that trust isn’t a single feature — it’s the result of building the right ecosystem:
“If you're not building the foundational things, you are going to find yourself having problems.”
He also called out why many early chatbot deployments fall short:
“You all built chatbots last year… how many of you are going, ‘and ours is not very good’?”
And explained why:
“You can't just go off and build a chatbot on a public model and cross your fingers and hope it works really well. It's not gonna.”
Instead, he pointed to a shift toward smaller, domain-specific models:
“Those are getting popular because they have lower cost… and you can refine them easier… That allows you to lower the number of hallucinations and raise that trust.”
Final Thoughts
Both speakers agreed that enterprises are entering what Danielle called the “messy middle”, where foundational work becomes unavoidable.
“2026 is what I am calling the messy middle of AI… foundational infrastructure becomes even more important.”
Sean added that the debate around data quality is about to end:
“Bad data gives you bad outputs… we're sort of back to basics.”
And as the space accelerates, the pace of change itself is becoming a challenge:
“I've never had to work this fast and this hard to keep pace with the innovation in technology.”
Want the Full Research?
Download the full research report here, and watch the webinar replay here.