Bridging the AI Hype Gap: Real-World Insights From Data Leaders On What It Takes To Succeed
Data leaders from JLL, Databricks, and Bigeye discussed the gap between AI ambitions and reality at our Chicago roadshow. The organizations succeeding focus on specific use cases, invest in data foundations first, and avoid trying to build custom models.

Last week, we brought together data and AI leaders in Chicago for a candid discussion about the state of AI in enterprise data operations. The conversation revealed a stark reality: there's a massive gap between AI ambitions and what companies can actually deliver on right now.
Here's what we learned, and what it means for your AI strategy.
The Expectation Problem
"I see it every day and it makes my head want to explode," said Paul Sill, Head of Visionary Insights Group at JLL. "There's a huge breakdown between what clients think AI can do for them and the reality of what their data can actually make it do."
This sentiment echoed throughout our panel. Companies are embracing new AI initiatives, but the reality doesn't match the hype. And that mismatch is creating real problems.
Raj Perumal, ISV Alliances Director at Databricks, put it this way: "A lot of people think their company is gonna build the next ChatGPT. They try that, then they get disappointed."
The pattern is clear: organizations set massive goals without understanding the practical constraints. They want transformational results in weeks, not months. They expect AI to work magic with messy, siloed data.
But here's what actually works.
What Succeeds: Focused Use Cases with Real Business Value
The panelists shared examples where AI is delivering concrete results—but they're not the flashy, everything-changing implementations you might expect.
JLL's internal ChatGPT implementation: JLL built the first generative chat engine purpose-built for commercial real estate. Not for customers, but for employees. It's deployed across their global organization for tasks like taking notes on phone calls and editing emails. The key? It solves specific problems employees face every day.
HVAC optimization at scale: JLL also uses AI to make micro-adjustments to HVAC systems across the office buildings they manage. These tiny tweaks add up to millions of dollars in cost savings and energy efficiency improvements. The application works because the data was already there, the use case was concrete, and the value was measurable.
Clinical trials acceleration: One healthcare company mentioned in the discussion started using AI to analyze scans and documentation, saving 120 years of employee time in clinical trial processes. This type of application is a perfect example of leveraging AI to process massive volumes of data that humans simply can't handle at scale.
The pattern across successful implementations? They solve capacity problems. They remove the bottlenecks that prevent humans from doing their best work.
Reality Check: Your Data Foundation
Every panelist emphasized the same point: your data foundation still matters. A lot.
"I feel like we're seeing the same hype cycles," Raj noted. "Like you need your data to look good and be in one place so you can do your use cases. And people are realizing that problem has exploded with AI because not only do you need your traditional financial data, you also need your HR data and work data."
The challenge is more than technical; it's organizational. AI projects need data from across the enterprise. That means breaking down silos that have existed for years.
Jim Barker, Director of Professional Services at Bigeye, was direct: "Now is a really good time to sell your executive team on the vision of AI and then get the funding to make sure your data pipelines work right."
Use the AI hype to fix foundational problems: Smart data leaders are using AI excitement to secure budget for the unglamorous work that actually enables AI success, like metadata management and pipeline reliability initiatives.
Trust and Governance Can't Be Afterthoughts
The conversation took a cautionary turn when discussing data governance. Jim shared a story from a Procter & Gamble presentation where a live ChatGPT demo accidentally revealed Kimberly-Clark's strategic research directions, data that had been inadvertently uploaded by an employee.
"Any of the data we're bringing into these systems, even though ChatGPT will say they've got an enterprise model where your data will be protected—you're gonna have issues internally where somebody's gonna run a report, drop it to a spreadsheet, upload it into whatever tool of the day is."
The risks are real:
- Data classification gaps: Organizations need to know what data is public, private, restricted, or internal
- Bias concerns: Both intentional and unintentional bias in AI outputs
- Compliance requirements: State-by-state AI regulations are emerging
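Closing the data classification gap usually starts with a guardrail that fails closed: nothing leaves the company boundary (for example, pasted into an external AI tool) unless it carries an explicitly allowed label. Here is a minimal sketch in Python; the classification levels, the allow-list policy, and the `check_export` helper are all illustrative assumptions, not anything the panel described:

```python
from enum import Enum

class Classification(Enum):
    PUBLIC = 1
    INTERNAL = 2
    RESTRICTED = 3
    PRIVATE = 4

# Hypothetical policy: only PUBLIC and INTERNAL data may be sent
# to tools outside the company boundary, such as an external AI service.
EXTERNAL_AI_ALLOWED = {Classification.PUBLIC, Classification.INTERNAL}

def check_export(records: list[dict]) -> list[dict]:
    """Return only records cleared for external AI tools; report the rest."""
    cleared, blocked = [], []
    for record in records:
        # Unlabeled data is treated as RESTRICTED by default (fail closed).
        level = record.get("classification", Classification.RESTRICTED)
        (cleared if level in EXTERNAL_AI_ALLOWED else blocked).append(record)
    if blocked:
        print(f"Blocked {len(blocked)} record(s) from leaving the boundary")
    return cleared
```

The fail-closed default matters most: in the spreadsheet-upload scenario Jim described, it's the unlabeled export that leaks, so unlabeled data must be blocked until someone classifies it.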
But governance can't slow innovation to a crawl.
Strategic Guidance for Data Leaders
For small and mid-size companies: Start with off-the-shelf tools. Use APIs from established providers to experiment and learn. Don't try to build custom models until you've mastered the basics and have a clear, high-value use case.
Raj's advice: focus on immediate productivity gains rather than trying to build the next breakthrough model. Small companies can get significant value from 10% time savings without the complexity of custom development.
For large enterprises: Plan for longer timelines with real investment, address governance proactively, and implement rigorous evaluation processes.
Jim mentioned one Fortune 10 company that introduced an "AI bullshit meter" for every business decision presented to the board. It forces teams to think through realistic expectations and prevents overpromising.
Practical Next Steps
Based on the panel discussion, here's what data leaders should focus on:
Immediate actions:
- Audit your current data classification and governance
- Identify high-impact, low-complexity AI use cases
- Start experimenting with off-the-shelf AI tools for productivity gains
- Use AI hype to secure funding for data infrastructure improvements
Medium-term investments:
- Implement proper metadata management
- Build cross-functional alignment on data definitions and SLAs
- Establish clear policies for AI tool usage
- Create frameworks for evaluating AI project success
Long-term strategy:
- Develop internal AI capabilities gradually
- Build trust through transparent, explainable implementations
- Focus on augmenting human capabilities rather than replacing them
The Opportunity Ahead
Despite the hype and the challenges, the panelists were optimistic about AI's potential. The key is approaching it with realistic expectations and solid foundations.
The companies that succeed with AI won't be the ones that jump on every trend or bet on revolutionary change overnight. They'll be the ones that build strong data foundations first, choose focused, high-value use cases, and maintain appropriate governance and oversight, all while measuring success against concrete business metrics.
The AI revolution is real. But like every technology revolution before it, success will come to organizations that combine ambitious vision with practical execution.
Join us at an upcoming Bigeye roadshow stop in a city near you. Bring your toughest AI and data questions!