Thought leadership
October 15, 2025

In Conversation with EY Park and Rahim Hajee: The 4 Pillars of Enterprise AI Success

5 min read

Eleanor Treharne-Jones

When we brought our No Bad Data roadshow to Toronto, the goal wasn’t to rehash what everyone already says about AI. We wanted to dig into what successful enterprise AI adoption actually looks like inside large, risk-aware organisations.

On stage with me were EY Park, who leads data analytics and AI at University Pension Plan Ontario, and Rahim Hajee, CEO of Adastra North America, an AI and data consultancy.

Over the course of an hour, a clear framework began to take shape. Without ever naming it as such, both speakers kept returning to what I’ve come to think of as the four pillars of enterprise AI success: Focus, Teamwork, Data Confidence, and Change Management.

Here’s how that conversation unfolded, and what these pillars really mean when you’re trying to make AI useful, reliable, and trusted within your company.

Pillar 1: Focus 

“We went from experimenting with a few automated scripts to the board suddenly asking, ‘What’s our AI strategy?’”

That sudden shift from curiosity to expectation is something I’ve heard echoed in nearly every city on this roadshow. Boards, executives, and even customers want to know what the plan is, even while the technology itself is still evolving.

EY’s team chose not to chase everything at once. “We built an AI usage framework — what we’ll use AI for, and what we won’t,” she explained. “Then we picked four key business areas and committed to delivering something tangible in each. We weren’t going to boil the ocean.”

The magic number, as it turned out, was four. “We had the same thing,” Rahim said, smiling. “Four initiatives, because if you try to tackle ten, you end up proving none. Focus gives you permission to say no to the noise.”

What struck me most was how both of them spoke about maturity not as a stage of sophistication, but as a discipline of prioritisation. Rahim put it plainly: “AI maturity isn’t a black box, it’s treating AI like any other strategic capability. You start with business value, you measure, you iterate.”

EY shared how they identified those first four opportunities by looking at where value was already leaking: repetitive reporting, process bottlenecks, and places where small time savings would build confidence quickly. “You can’t show impact unless you know where you started,” she told the room. “Even if it’s two hours saved on a recurring task, that’s your proof point. It’s what gets people to believe.”

Listening to them, I was reminded that “focus” isn’t the opposite of ambition; it’s what turns ambition into action. The organisations that succeed in adopting AI simply decide where to begin and commit to delivering something of real business value.

EY described it as being “very tactical” in year one. “We treated it like a project plan. Four business areas, each with a clear owner, and we measure progress against that. The momentum matters as much as the output.”

Start small, yes, but start measurably. Build confidence, then scale. As EY put it with quiet finality, “That’s how you build trust, by delivering something that actually works.”

Pillar 2: Teamwork 

If “focus” was about choosing where to begin, “teamwork” was about who comes with you.

Rahim framed it simply: “AI maturity isn’t just about technology readiness; it’s about organisational readiness.” The companies making real progress, he said, are the ones that “get legal, risk, IT, and business all focused around the same mission.”

It’s a deceptively simple point, but it changes everything. Teams that have never had to work closely together suddenly find themselves sharing ownership of the same system. The language of risk, of data, of innovation, all has to meet somewhere in the middle.

At EY’s organisation, making that shift has been intentional. “We mobilised a delivery team for year one,” she said. “We have a dedicated comms person, weekly updates, and even internal hackathons. We bring interns, engineers, and business teams together to experiment.” What struck me most wasn’t the process, but the humility in how she described it: “We’re still learning. We have two AI tools in-house, and sometimes two teams end up testing the same thing. That’s fine. It’s how we find what works.”

Rahim shared a similar view. “Experimentation only works when you have the right guardrails in place,” he said. “We set some policies, gave everyone training, and then let them play. Report back, share what you learned. That’s how you find the next phase of value.”

EY’s team has even gone so far as to offer “white-glove” sessions with executives. “We sit with them, one-to-one,” she said. “Sometimes it’s as simple as showing someone how to rewrite an email with AI. Once they see the impact, they want to explore more.”

That top-down curiosity matters as much as the bottom-up innovation. It tells people across the organisation that learning is safe — that it’s acceptable to try, fail, and ask questions out loud.

Pillar 3: Data Confidence

Both speakers agreed: AI can only ever be as good as the data it depends on.

“You can’t automate trust,” EY said. “You have to build it, and that starts with the data.”

For her, that confidence begins with clarity. “Every dataset we use, we should be able to explain where it came from, what’s been done to it, and why,” she said. “If we can’t, we don’t use it.” It’s a discipline, not a checklist — a way of giving the business the same assurance over data that they expect from financial reporting or audit trails.

She laughed recalling one of her team’s early tests. “We turned on a new AI dashboard, and half of it came out in French,” she said. “It was a reminder that models will do exactly what you tell them — but not always what you mean.”

Rahim picked up on that. “When data quality isn’t there, your results just won’t make sense,” he said. “That’s why enterprises need observability — not just over the models, but the data feeding them. You have to trace every step and prove what you did.”

He calls this a data-trust pipeline: “We used to talk about data pipelines,” he explained. “Now we talk about data-trust pipelines, systems that make quality, lineage, and accountability visible from start to finish.”
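
To make that idea concrete, here is a minimal sketch of what a single step in such a pipeline might look like: a transformation that also emits a record of where its input came from, when it ran, and which quality checks it passed. This is my illustration, not anything either speaker described; the names (`run_step`, `TrustRecord`, `payments.raw_transactions`) are hypothetical, and a real deployment would lean on a data observability platform rather than hand-rolled dictionaries.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class TrustRecord:
    """What we can show about a step: where the data came from, when it ran, what was checked."""
    step: str
    source: str
    ran_at: str
    checks: dict = field(default_factory=dict)


def run_step(name, source, rows, transform):
    """Apply a transformation and return the output alongside an auditable trust record."""
    output = transform(rows)
    record = TrustRecord(
        step=name,
        source=source,
        ran_at=datetime.now(timezone.utc).isoformat(),
        checks={
            "row_count": len(output),
            "no_null_amounts": all(r.get("amount") is not None for r in output),
        },
    )
    return output, record


# Usage: a tiny cleaning step, plus the record you could hand to a risk review.
raw = [{"amount": 120.0}, {"amount": None}, {"amount": -40.0}]
clean, record = run_step(
    name="drop_missing_amounts",
    source="payments.raw_transactions",  # hypothetical upstream dataset
    rows=raw,
    transform=lambda rs: [r for r in rs if r.get("amount") is not None],
)
print(record)
```

The point isn’t the code itself; it’s that every output arrives with its own evidence, which is what makes quality, lineage, and accountability “visible from start to finish.”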

EY described a similar approach in her organisation. Every new AI initiative goes through impact assessments and risk reviews, not to slow progress, but to make it repeatable. “It’s about knowing what’s actually in play,” she said. “That’s what gives people confidence to use it.”

And that, really, is the point: confidence is measurable. You can see it in consistent inputs, in outputs that can be explained, and in processes that stand up to scrutiny.

When enterprises talk about “trustworthy AI,” this is what they mean. Not blind faith, but a certainty earned through transparency and verification.

Pillar 4: Change Management

“The technology’s the easy part,” EY said, smiling a little as if she’d learned that the hard way. “It’s the people that make or break it.”

Her team has learnt that progress depends as much on empathy as on engineering. They’ve run hackathons and internal “show-and-tell” sessions, pairing interns who naturally experiment with long-serving colleagues who are still hesitant. “Sometimes it’s just sitting down with someone and showing them how AI can rewrite an email or tidy a slide deck,” she said. “Small wins matter. Once people see it, they start to imagine where else it could help.”

Rahim has watched the same pattern across industries. “Don’t promote AI as the guru,” he told the room. “Treat it like an intelligent intern, someone who can take work off your plate but still needs guidance.” It’s a framing that removes fear. Employees stop worrying that AI will replace them and start asking how it can make their day easier.

EY came back to her team’s “white-glove” support for executives: sitting beside them, helping them test ideas in real time. “We realised that adoption starts at the top,” she said. “If leaders use it, others follow.”

Still, she was candid about the numbers. “About 85 percent of our people are using it regularly,” she said, “but there’s that last 15 percent who just won’t open it. They’re capable, they’re curious, they just don’t want to change.”

The resistance isn’t usually technical; it’s emotional. People need proof that the tools won’t make them slower, or exposed, or replaceable. And that proof only comes from experience.

As Rahim put it later, “The people who use AI won’t necessarily replace anyone, but the people who learn how to use it well will be the ones who move fastest.”

Alignment with the Industry 

Listening to EY and Rahim in Toronto, I couldn’t help noticing how closely their instincts aligned with what researchers have been mapping out for years.

The MIT Center for Information Systems Research, for instance, describes enterprise AI maturity across four stages, from experimental pilots to fully integrated, value-generating systems. Their research identifies four capabilities that separate the leaders from the rest: a robust data foundation, governance discipline, cross-functional operating models, and a culture of continuous learning.

Deloitte’s 2024 State of AI in the Enterprise study echoes that pattern, highlighting that the highest-performing organisations invest early in AI strategy, data infrastructure, talent, and risk management. That’s remarkably close to the same four areas our panel discussed.

As one researcher put it, AI maturity isn’t measured by how much you automate, but by how predictably and responsibly you can do it again tomorrow.

Closing Thoughts 

The “four pillars” are habits. They’re the daily, sometimes unglamorous disciplines that turn AI from a presentation topic into part of how an enterprise actually succeeds.

As Rahim put it, “AI maturity isn’t about doing everything, it’s about doing the right things, the right way.”

Perhaps that’s the real marker of maturity: when AI stops feeling like an initiative and starts feeling like the way you work.
