5 Hard-Won Lessons from the Frontlines of Enterprise AI Leadership
TL;DR: Five enterprise AI leaders shared what separates successful implementations from the 95% that fail: involve legal from day one, don't wait for perfection, prioritize security, focus on measurable business outcomes, and manage organizational change carefully.


At our recent Atlanta roadshow, we assembled a panel of executives who've actually implemented AI at scale across multiple enterprises. The group included Chase from OneTrust (15 years in privacy and compliance), Manish from Georgia-Pacific (25 years in analytics), Arbind from Apptad (12 years in data strategy consulting), and Srinivas (24 years in technology), most recently as an SVP leading AI initiatives.
Their candid discussion revealed why so many AI projects fail, and what separates the winners from the 95% that don't make it.
1. Involve legal from project kickoff
The regulatory landscape around AI is evolving quickly. The EU AI Act takes effect January 1st with revenue-based penalties, but even existing regulations create gray areas. "When it's your data going out to OpenAI and Anthropic, are you going to be in a legal hole?" Manish asked.
Most teams treat compliance as a final checkpoint, but that approach creates problems. Srinivas learned this firsthand: "You want your legal and compliance team on your side even before your idea is fleshed out. We had regular standing meetings with compliance teams, including engineers who could answer technical questions in real-time."
The key is addressing concerns proactively. Legal and compliance teams want to enable business initiatives, not block them. As Chase noted, "Every legal, compliance, and risk team's North Star is not being seen as a blocker. They want to add business value and accelerate data team initiatives."
One practical approach: Set up weekly meetings with legal and security from project start. Cover where data is stored, who has access, and what your mitigation plan looks like if something goes wrong. This upfront investment prevents costly rebuilds later.
2. Don't let perfect be the enemy of good
Georgia-Pacific operates manufacturing equipment worth billions of dollars, where operational mistakes can have serious safety consequences. Even in that high-stakes environment, they prioritize getting working solutions deployed quickly.
"We got the data, it may not be perfect, but we had a workable product very quickly. 80% of the value is good," Manish explained. Their rule: no AI project takes longer than four weeks.
This approach requires discipline around scope. Georgia-Pacific focused on their highest-value use case first: preventing equipment failures and safety incidents using data from sensors across their facilities. They built solutions that help human operators make better decisions, rather than trying to automate everything.
Perfect solutions that never ship don't create business impact.
3. Start with clear friction points
Successful AI implementations begin with specific business problems, not general technology exploration. "Most of the time you are not associating your AI adoption with the business goals," Arbind observed. "Why are you adopting AI, why are you implementing something? It can't be just because everybody else is implementing."
"You gotta start with the big why," Srinivas said. "Whether it's reducing call deflection time by X percent or onboarding customers 5% faster."
Kyle shared an example that illustrates this approach. A collections team wanted to use AI to increase efficiency, but they framed the problem specifically: "How many more collection notices can our team send per day?" The solution was an AI agent that researches accounts and drafts emails for human review. The team scaled the number of notices sent, collecting more revenue for the business, and the rollout directly addressed their original concern: the team's limited bandwidth.
Srinivas described their discovery process at Deluxe: "We did discovery across the organization: where is there real pain?" They found finance teams taking two weeks to close books, with 10-15 people manually processing invoices, a clear friction point worth addressing.
If you can't measure the business impact in concrete terms like time saved or revenue generated, the use case isn't ready for development.
4. Security requirements can't be overlooked
Security considerations need to be built into AI projects from the beginning, not added later. Kyle shared a cautionary example: a major European fashion brand built an internal chatbot without involving their security team. When the CISO eventually ran a security scan, it revealed significant vulnerabilities. The project was immediately shut down, and the team's credibility was damaged.
"It's really hard to overstate how early you should be working with the security team," Kyle noted.
Srinivas described a more effective approach: "We had a security council. You cannot put anything out, even something as simple as using a chatbot, without pitching your idea and getting approval." While this adds process overhead, it prevents much more costly problems later.
The key is treating security as an enabler rather than a gate. When security teams understand what you're building and why, they can help you build it safely rather than stopping it entirely.
5. Manage organizational change intentionally
Cultural challenges often derail AI projects more than technical ones. Manish faced this exact challenge at Georgia-Pacific: "I have people who are perfectionists with analysis paralysis. I have brilliant grad school kids who move fast but break things. And I have people living in fear of AI. Fear of replacement, fear of failure."
This creates a three-way tension that paralyzes teams. Perfectionists won't ship until they hit 100% accuracy. New graduates produce fast results but lack context and industry knowledge. Experienced employees resist because they're worried about job security.
His solution was architectural: "Mix people who know the architecture and business with people who can write code and fail fast."
The experienced people provided business context and institutional knowledge. The new graduates brought technical speed and willingness to experiment.
But mixing teams is only half the solution. The other half is proving value in a way that builds organizational confidence rather than triggering resistance. Srinivas described their approach at his previous company, Deluxe: "We took a client zero mentality. We want to prove it internally first on business functions like finance, marketing, sales enablement."
They started with their CFO's biggest pain point: accounts payable, where 10-15 people manually processed invoices and it took two weeks to close the books. By solving an internal problem first, they demonstrated concrete value without the added pressure of customer-facing risk. This gave them credibility to expand to more ambitious use cases.
As Srinivas noted: "When you have success, celebrate. Create community, create momentum. It becomes internal competition—'Look what that team did, can you help me solve my problem?'"
The bottom line
These leaders' experiences point to a consistent theme: successful AI implementation depends more on execution discipline than technological sophistication. The organizations getting results focus on clear business outcomes, involve the right stakeholders early, iterate quickly, and manage change thoughtfully.
To find a No Bad Data Tour stop near you, check out our upcoming events!