Adrianna Vidal
adrianna-vidal
Thought leadership · December 4, 2025

The Data Quality Crisis Killing AI Projects (and Other Hard Truths)

5 min read

Data quality issues more than doubled as the top obstacle this year (44% in 2025 vs 19% in 2024), forcing organizations to confront the reality that unreliable data becomes risk at scale. Cost management jumped from 7th most important success metric to 2nd place, with software expenses exceeding expectations for 51% of organizations, signaling the end of "whatever it takes" AI spending. Meanwhile, only 20% of companies have built proper foundations to support their projects.


The AI honeymoon period is officially over.

While headlines continue to trumpet AI's transformative potential, a new research study from BARC reveals a more nuanced reality: organizations deploying AI in production are facing fundamentally different challenges than they anticipated. The gap between AI experimentation in 2024 and the production realities of 2025 tells a story of rapid growth and maturation, with some hard-learned lessons.

BARC's "Lessons From the Leading Edge" study, which surveyed 421 global respondents about their AI production experiences, reveals three critical shifts that are reshaping how enterprises approach AI strategy. For data and AI leaders navigating this evolving landscape, these findings offer both warnings and roadmaps for sustainable AI success.

Finding #1: Data Quality Has Become the Primary AI Blocker

The most striking finding in BARC's research is the dramatic rise of data quality as an obstacle to AI success.

In 2024, only 19% of organizations cited data quality issues as a top challenge. By 2025, that number had more than doubled to 44%, making it the single biggest reported roadblock to AI project success.

The classic "garbage in, garbage out" principle still applies, but is now multiplied across dozens of agents. Problems that seemed manageable in smaller-scale implementations become critical bottlenecks when rolled out company-wide, or to customers.

"As more projects were delivered, data quality rose to the number one obstacle," the research notes. "Poor data quality impacts context of outputs and certainly accuracy."

For data leaders, this finding validates what many have suspected: that AI's success hinges on data foundation work that often gets deprioritized in favor of flashier initiatives. But it also presents an opportunity. Organizations that have used this time to invest in comprehensive data observability and data quality management are now positioning themselves for AI success while competitors continue to struggle with unreliable data.

The regional data tells an even more pointed story: European respondents rank data quality as their top obstacle by a margin of 15 percentage points over their North American peers, while North Americans struggle more with integration issues. This disparity may reflect regulatory pressures, as European organizations operate under GDPR and face emerging AI governance requirements that make data quality failures more costly.

As AI regulations like the EU AI Act expand globally, this could be a preview of the data quality challenges that organizations worldwide will face when compliance requirements intensify beyond current standards.

Finding #2: The Great AI Cost Reality Check

The second major shift revealed in BARC's research centers on cost management.

In 2024, cost ranked as the 7th most important success metric for AI projects. By 2025, it had jumped to second place, with 30% of respondents citing cost management as a key measure of AI project success.

This dramatic prioritization reflects a broader industry maturation from "whatever it takes, get AI deployed" thinking to more measured, sustainable approaches. The research reveals where these unexpected costs are hitting hardest:

  • 51% report software costs exceeded expectations
  • 43% faced unexpected validation and quality control expenses
  • 42% encountered higher people and training costs

Perhaps most concerning, organizations are responding to cost pressures by limiting project scope (42% of respondents) and implementing phased delivery approaches (38%). While these can be smart strategies, they also signal that many AI initiatives may be falling short of their original ambitious goals due to budget constraints.

For AI leaders, this cost reality check represents both a challenge and a strategic opportunity. Organizations that can demonstrate clear ROI and cost-effective AI deployment will have significant competitive advantages. The research suggests that AI leaders—those organizations with mature foundational capabilities—are still struggling with costs at similar rates to other organizations, indicating that even the most prepared companies are grappling with AI economics.

This cost pressure is driving a shift toward more practical approaches: 26% of organizations are adopting smaller, more targeted models, and 25% are leveraging free open-source technologies where appropriate. The key insight for data leaders is that sustainable AI strategies require upfront cost modeling and realistic budget planning that accounts for the full lifecycle of AI projects.

Finding #3: The Shift to Measured Approaches

The third critical finding reveals a fundamental change in how organizations approach AI strategy.

While 80% of organizations continue to rush into AI without establishing foundational capabilities, the 20% that have built proper foundations are dramatically outperforming their peers.

BARC's research identifies "AI leaders" (organizations that have formalized capabilities across seven foundational areas, including leadership, governance, architecture, security, and data access policies). These leaders are nearly twice as likely to have more than five projects in production (53% compared to 28% for non-leaders) and three times as likely to have 10 or more projects (30% vs. 10%).

Interestingly, the percentage of organizations achieving AI leadership status has remained relatively flat across multiple BARC studies, suggesting this isn't simply a maturity timeline issue; it's a strategic choice. Most organizations continue to prioritize quick wins over foundational investment.

The implications extend beyond project volume. AI leaders are approaching responsible AI differently, with shifting priorities that reflect production realities rather than theoretical concerns. Privacy and compliance concerns are rising among production deployments, while theoretical issues like human-AI collaboration are declining in priority.

This suggests that successful AI deployment requires a measured approach that balances innovation speed with proper foundations. Organizations that invest in governance, data quality, and architectural requirements upfront are positioned to scale AI more effectively than those that rush into deployment.

What This Means for Data and AI Leaders

These three findings collectively paint a picture of an industry in transition from AI experimentation to AI production. The organizations that will succeed in this next phase are those that recognize AI as an operational discipline requiring the same rigor as any other enterprise system.

For data leaders specifically, this research validates the critical importance of data infrastructure investment. As AI models become more sophisticated and AI applications become mission-critical, the quality and governance of underlying data becomes paramount. Organizations that treat data observability, lineage, and governance as foundational to AI strategy—rather than afterthoughts—will have sustainable competitive advantages.

The cost pressures revealed in the research also underscore the need for data leaders to demonstrate clear business value from data investments. AI initiatives that can't show measurable ROI will face increased scrutiny in 2026 and beyond.

Most importantly, the research suggests that the window for building proper AI foundations is closing rapidly. As AI becomes table stakes across industries, the organizations that invested early in proper governance, data quality, and architectural foundations will be positioned to scale AI effectively while others struggle with fundamental quality and cost challenges.

The message from BARC's research is clear: the AI revolution is real, but sustainable success belongs to organizations that combine ambitious vision with practical, foundational execution.

Ready to dive deeper into the data? BARC's complete "Lessons From the Leading Edge" research provides detailed analysis across all aspects of enterprise AI deployment. Download the full report to access comprehensive insights that can guide your AI strategy in 2025 and beyond.

| Resource | Monthly cost ($) | Number of resources | Time (months) | Total cost ($) |
| --- | --- | --- | --- | --- |
| Software/Data engineer | $15,000 | 3 | 12 | $540,000 |
| Data analyst | $12,000 | 2 | 6 | $144,000 |
| Business analyst | $10,000 | 1 | 3 | $30,000 |
| Data/product manager | $20,000 | 2 | 6 | $240,000 |
| Total cost | | | | $954,000 |
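Each line item in the staffing-cost table is simply monthly rate × headcount × duration, summed into the grand total. A quick arithmetic check, using the figures from the table (the role names and rates are the table's, not a general benchmark):

```python
# Line items from the staffing-cost table:
# (role, monthly rate in $, headcount, months)
resources = [
    ("Software/Data engineer", 15_000, 3, 12),
    ("Data analyst",           12_000, 2, 6),
    ("Business analyst",       10_000, 1, 3),
    ("Data/product manager",   20_000, 2, 6),
]

# monthly rate x headcount x months = line total
line_totals = {role: rate * count * months
               for role, rate, count, months in resources}
total = sum(line_totals.values())

print(line_totals["Software/Data engineer"])  # 540000
print(total)                                   # 954000
```

This kind of upfront cost modeling, extended over the full project lifecycle, is exactly the budget planning the research suggests most organizations skipped.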
| Role | Goals | Common needs |
| --- | --- | --- |
| Data engineers | Overall data flow. Data is fresh and operating at full volume. Jobs are always running, so data outages don't impact downstream systems. | Freshness and volume monitoring; schema change detection; lineage monitoring |
| Data scientists | Specific datasets in great detail. Looking for outliers, duplication, and other—sometimes subtle—issues that could affect their analysis or machine learning models. | Freshness monitoring; completeness monitoring; duplicate detection; outlier detection; distribution shift detection; dimensional slicing and dicing |
| Analytics engineers | Rapidly testing the changes they're making within the data model. Move fast and not break things—without spending hours writing tons of pipeline tests. | Lineage monitoring; ETL blue/green testing |
| Business intelligence analysts | The business impact of data. Understand where they should spend their time digging in, and when they have a red herring caused by a data pipeline problem. | Integration with analytics tools; anomaly detection; custom business metrics; dimensional slicing and dicing |
| Other stakeholders | Data reliability. Customers and stakeholders don't want data issues to bog them down, delay deadlines, or provide inaccurate information. | Integration with analytics tools; reporting and insights |
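The freshness and volume checks in the table above reduce to simple threshold comparisons against a table's last load. A minimal, hypothetical sketch (the function names and thresholds are illustrative, not any particular tool's API):

```python
from datetime import datetime, timedelta, timezone

def check_freshness(last_loaded_at: datetime, max_lag: timedelta) -> bool:
    """True if the table was loaded within the allowed lag window."""
    return datetime.now(timezone.utc) - last_loaded_at <= max_lag

def check_volume(row_count: int, expected_min: int) -> bool:
    """True if the latest load meets the expected minimum row count."""
    return row_count >= expected_min

# A table loaded 2 hours ago, against a hypothetical 6-hour freshness SLA:
recent = datetime.now(timezone.utc) - timedelta(hours=2)
print(check_freshness(recent, timedelta(hours=6)))   # True
print(check_volume(row_count=120_000, expected_min=100_000))  # True
```

In practice an observability platform learns these thresholds from history rather than hard-coding them, but the underlying contract each role relies on is the same.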
about the author

Adrianna Vidal

Adrianna Vidal is a writer and content strategist at Bigeye, where she explores how organizations navigate the practical challenges of scaling AI responsibly. With over 10 years of experience in communications, she focuses on translating complex AI governance and data infrastructure challenges into actionable insights for data and AI leaders.

At Bigeye, her work centers on AI trust: examining how organizations build the governance frameworks, data quality foundations, and oversight mechanisms that enable reliable AI at enterprise scale.

Her interest in data privacy and digital rights informs her perspective on building AI systems that organizations, and the people they serve, can actually trust.


