Thought leadership
August 7, 2025

We're Launching The AI Trust Summit

4 min read

TL;DR: AI agents need enterprise data to work, but that creates new risks most companies aren't prepared for. The AI Trust Summit will gather senior leaders who are actually solving AI trust challenges, from data governance to security, for actionable strategies you'll be able to implement immediately.

Adrianna Vidal

While companies race to deploy AI agents that can automate finance workflows, customer operations, and compliance processes, a critical gap is widening. These systems need access to enterprise data to work, and that creates new risks that most organizations aren't fully prepared to manage.

We're launching the AI Trust Summit in early 2026 because the conversation about AI trust needs to happen now, before more companies learn the hard way what happens when AI systems act on stale data, expose sensitive information, or make confident-sounding decisions based on ungoverned datasets.

What is AI trust?

AI trust, at its core, is about whether you can confidently deploy AI systems that access your enterprise data without creating business-critical risks.

Think about it: an AI agent helping your collections team needs access to customer account data, payment histories, and communication records. If that data is stale, inaccurate, or poisoned by a malicious actor, the agent might draft messages that damage customer relationships or violate compliance requirements. Even worse, it might expose personally identifiable information in contexts where it shouldn't appear.

Air Canada learned this lesson when their AI-powered chatbot incorrectly promised a bereavement fare discount that didn't exist. A tribunal held them legally liable for their agent's autonomous response. Whether the error came from a model hallucination or unverified internal data, the outcome was the same: real financial and reputational consequences.

Why AI trust matters more than ever

Agentic AI is expanding the surface area for automation faster than most security and governance frameworks can adapt.

Here's what we're seeing in the field:

Data quality risks are unavoidable. Unlike other AI risks that can be managed through narrow permissions, data quality issues affect nearly every agent scenario. Data freshness changes daily or hourly, and agents can easily produce confident-sounding but incorrect outcomes when working with stale information.
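One guardrail for this failure mode is simply refusing to answer when the underlying data is older than a freshness budget. Here's a minimal sketch of the idea; the six-hour threshold and the fallback message are illustrative assumptions, not a reference to any specific product:

```python
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(hours=6)  # illustrative freshness budget

def is_fresh(last_updated: datetime, max_age: timedelta = MAX_AGE) -> bool:
    """Return True if the dataset was updated within the freshness budget."""
    return (datetime.now(timezone.utc) - last_updated) <= max_age

def guarded_answer(last_updated: datetime, answer: str) -> str:
    """Have the agent fail closed rather than answer from stale data."""
    if not is_fresh(last_updated):
        return "Data may be stale; escalating to a human reviewer."
    return answer
```

The point is the shape of the control, not the threshold: the agent checks data freshness before acting, and degrades to a human handoff instead of a confident wrong answer.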

Sensitive data exposure is widespread. Recent research shows that 8.5% of prompts submitted to major foundation models contain sensitive data, with nearly half categorized as customer information. Business users are already engaging with AI in workflows involving sensitive information, often without realizing the risk.
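A common mitigation is scanning prompts for obvious sensitive patterns before they ever reach a model. The sketch below is deliberately naive, using two toy regexes; real deployments rely on dedicated DLP and classification tooling with far broader coverage:

```python
import re

# Deliberately simple patterns for illustration only; production systems
# use dedicated DLP/classification services, not two regexes.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace matched sensitive substrings before the prompt is sent out."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt
```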

Ungoverned data creates hidden vulnerabilities. Enterprise data warehouses and lakes contain testing data, sample datasets, and other information not intended for production use. Agents may lack the context to distinguish between reliable, governed datasets and unreliable ones.
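A blunt but effective control here is to resolve agent data requests only against an explicit allowlist of governed datasets, so test tables and sample data are unreachable by construction. A sketch, with hypothetical dataset names:

```python
# Hypothetical allowlist of governed, production-grade datasets.
GOVERNED_DATASETS = {"prod.customers", "prod.payments", "prod.invoices"}

def resolve_dataset(name: str) -> str:
    """Allow agent reads only from governed datasets; fail closed otherwise."""
    if name not in GOVERNED_DATASETS:
        raise PermissionError(f"{name!r} is not a governed dataset")
    return name
```

Failing closed matters: an agent that cannot see an ungoverned table cannot confidently reason from it.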

The leaders deploying agentic AI at scale understand they need visibility and governance over how their agents access data. The ones who don't are setting themselves up for public failures, budget cuts, or worse.

Building the future of trusted AI

The AI Trust Summit will bring together the practitioners who are solving these problems in the real world. We're talking about the CIOs deploying agents in finance and HR functions, the AI leaders establishing steering committees, and the data governance professionals creating the frameworks that make trusted AI possible.

You'll learn about practical strategies for:

  • Mitigating risk when deploying AI systems at scale
  • Building governance requirements that actually work
  • Future-proofing enterprise AI strategies beyond the sandbox
  • Creating AI trust culture and tooling that scales

We're planning an intensive, in-person day of presentations and discussions designed for the people responsible for making AI work safely in enterprise environments.

Join the conversation

The AI Trust Summit will take place in early 2026 in California.

Want to stay in the loop? Sign up here and we'll keep you updated as details emerge.



Join the Bigeye Newsletter

1x per month. Get the latest in data observability right in your inbox.