We're Launching The AI Trust Summit
TL;DR: AI agents need enterprise data to work, but that creates new risks most companies aren't prepared for. The AI Trust Summit will gather senior leaders who are actually solving AI trust challenges, from data governance to security, for actionable strategies you'll be able to implement immediately.
While companies race to deploy AI agents that can automate finance workflows, customer operations, and compliance processes, a critical gap is widening. These systems need access to enterprise data to work, and that creates new risks that most organizations aren't fully prepared to manage.
We're launching the AI Trust Summit in early 2026 because the conversation about AI trust needs to happen now, before more companies learn the hard way what happens when AI systems act on stale data, expose sensitive information, or make confident-sounding decisions based on ungoverned datasets.
What is AI trust?
AI trust, at its core, is about whether you can confidently deploy AI systems that access your enterprise data without creating business-critical risks.
Think about it: an AI agent helping your collections team needs access to customer account data, payment histories, and communication records. If that data is stale, inaccurate, or poisoned by a malicious actor, the agent might draft messages that damage customer relationships or violate compliance requirements. Even worse, it might expose personally identifiable information in contexts where it shouldn't appear.
Air Canada learned this lesson when their AI-powered chatbot incorrectly promised a bereavement fare discount that didn't exist. A tribunal held them legally liable for their agent's autonomous response. Whether the error came from a model hallucination or unverified internal data, the outcome was the same: real financial and reputational consequences.
Why AI trust matters more than ever
Agentic AI is expanding the surface area for automation faster than most security and governance frameworks can adapt.
Here's what we're seeing in the field:
Data quality risks are unavoidable. Unlike other AI risks that can be managed through narrow permissions, data quality issues affect nearly every agent scenario. Data freshness changes daily or hourly, and agents can easily produce confident-sounding but incorrect outcomes when working with stale information.
Sensitive data exposure is widespread. Recent research shows that 8.5% of prompts submitted to major foundation models contain sensitive data, with nearly half categorized as customer information. Business users are already engaging with AI in workflows involving sensitive information, often without realizing the risk.
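To make that risk concrete, here's a minimal, illustrative sketch of a prompt check that flags obviously sensitive patterns before a request leaves your boundary. The pattern names and categories are assumptions for illustration; real deployments typically rely on dedicated PII or DLP classifiers rather than a handful of regexes.

```python
import re

# Illustrative patterns only; not a substitute for a proper PII/DLP classifier.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def flag_sensitive(prompt: str) -> list[str]:
    """Return the categories of sensitive data detected in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

prompt = "Follow up with jane.doe@example.com about the overdue invoice."
hits = flag_sensitive(prompt)
if hits:
    # Block, redact, or route for review before the prompt reaches the model.
    print(f"Prompt contains potentially sensitive data: {hits}")
```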
Ungoverned data creates hidden vulnerabilities. Enterprise data warehouses and lakes contain testing data, sample datasets, and other information not intended for production use. Agents may lack the context to distinguish between reliable, governed datasets and unreliable ones.
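As a rough illustration of what that missing context could look like, here's a hedged sketch of an agent-side guard that only admits datasets carrying governance metadata: a certification flag and a recent refresh timestamp. The field names and thresholds are assumptions for the example, not any particular catalog's schema.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical metadata an agent-side guard might consult before querying a table.
@dataclass
class DatasetMetadata:
    name: str
    certified: bool              # marked production-ready by a data owner
    last_refreshed: datetime     # most recent successful load

def usable_by_agent(meta: DatasetMetadata, max_staleness: timedelta) -> bool:
    """Allow the agent to query only certified, recently refreshed datasets."""
    fresh = datetime.now(timezone.utc) - meta.last_refreshed <= max_staleness
    return meta.certified and fresh

catalog = [
    DatasetMetadata("finance.payments", True, datetime.now(timezone.utc) - timedelta(hours=2)),
    DatasetMetadata("sandbox.sample_customers", False, datetime.now(timezone.utc) - timedelta(days=90)),
]

allowed = [m.name for m in catalog if usable_by_agent(m, max_staleness=timedelta(hours=24))]
print(allowed)  # only the certified, fresh table survives the filter
```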
The leaders deploying agentic AI at scale understand they need visibility and governance over how their agents access data. The ones who don't are setting themselves up for public failures, budget cuts, or worse.
Building the future of trusted AI
The AI Trust Summit will bring together the practitioners who are solving these problems in the real world. We're talking about the CIOs deploying agents in finance and HR functions, the AI leaders establishing steering committees, and the data governance professionals creating the frameworks that make trusted AI possible.
You'll learn about practical strategies for:
- Mitigating risk when deploying AI systems at scale
- Building governance requirements that actually work
- Future-proofing enterprise AI strategies beyond the sandbox
- Creating AI trust culture and tooling that scales
We're planning an intensive, in-person day of presentations and discussions designed for the people responsible for making AI work safely in enterprise environments.
Join the conversation
The AI Trust Summit will take place in early 2026 in California.
Want to stay in the loop? Sign up here and we'll keep you updated as details emerge.