Gaurav Rastogi
Thought leadership
March 6, 2026

Why Data Reliability Needs An Operating Model (Not More Alerts)

6 min read

Data teams often rely on alerts and job monitoring, but that approach misses the real reliability risks: data that flows successfully yet is incomplete, late, or wrong. In this guest perspective, Gaurav Rastogi, Senior Director of Data Analytics at Hertz, argues that organizations need a data reliability operating model built around prevention, detection, diagnosis, correction, and learning to protect trust in analytics and AI.


Disclosure: This article reflects my professional experience and observations. Any organizational examples are generalized to avoid sharing confidential or proprietary details. Bigeye is referenced as the observability platform used to help operationalize the framework described.

Every data leader I speak with is being asked some version of the same question: “How fast can we move with AI—without breaking trust?” 

That tension has become one of the defining challenges of modern data organizations. Cloud modernization has increased scale and speed, but it has also multiplied complexity: more pipelines, more dependencies, more consumers, and higher expectations for real-time insights. The result is a reliability gap that many enterprises underestimate until it shows up in the most damaging way: when executives, customers, or automated systems make decisions using data that was technically available, but practically wrong. 

This is not an isolated problem. Gartner has estimated that poor data quality costs organizations an average of $12.9 million per year, reinforcing that data reliability is no longer just an engineering concern; it is a business risk with material consequences.

Yet most organizations respond to this challenge by adding more dashboards, more thresholds, and more alerts. In my experience, that approach rarely works at scale. It creates noise, fatigue, and reactive firefighting. What’s missing is not more monitoring. It’s an operating model. 

An operating model defines how accountability is structured, how decisions are prioritized, and how feedback improves performance over time. Reliability, in my view, should not be layered onto pipelines as an afterthought—it should be embedded into how the organization runs. 

That realization led me to formalize what I call the Back-of-the-House Data Reliability Model: a framework that treats data pipelines as production systems and embeds reliability into how data organizations operate day-to-day. 

The Industry Problem: Green Pipelines, Wrong Data

Most enterprises still define reliability as job success or failure. That approach worked when data environments were smaller and primarily batch-oriented. It breaks down in distributed, cloud-native ecosystems—especially when analytics and AI consume the outputs.

Across industries, the failure modes are strikingly similar: 

  • Ingestion jobs complete even when source data is incomplete
  • Schema changes propagate without triggering system failures
  • Data arrives late but only some consumers notice
  • Volume shifts appear “normal” until dashboards are questioned 
  • AI models continue scoring on degraded or drifting inputs

An example of the reactive data pipeline cycle that erodes trust.

None of these scenarios necessarily cause platform outages. But all of them erode trust. And “trust outages” are often more expensive than downtime—because they slow decision-making, force manual validation, and create skepticism toward analytics and AI initiatives. 

These recurring patterns expose three structural weaknesses in how most enterprises approach reliability: 

  1. System completion is mistaken for data correctness. 
  2. Static thresholds are treated as intelligence. 
  3. Issues are detected without clear visibility into business impact. 

This pattern comes up repeatedly when I discuss data reliability in industry panels and leadership roundtables. When the problem is named clearly, the reaction is immediate: heads nod, eyes light up, and the conversation shifts from tooling to accountability. Leaders recognize the issue because they are already living it.


The Shift The Industry Must Make 

The reliability leap the industry needs is not incremental. It is a shift from reactive monitoring to proactive reliability engineering. 

Traditional monitoring focuses on whether systems ran. A reliability operating model focuses on how data behaves, who is impacted, and how issues can be prevented before they affect decisions. Instead of static thresholds, it emphasizes learned baselines. Instead of manual tracing, it relies on lineage and impact awareness. Instead of firefighting, it builds learning back into the system so reliability improves over time. 

This shift is what distinguishes organizations that scale analytics and AI responsibly from those that struggle with constant trust gaps. 

The distinction is subtle but profound. Monitoring asks, “Did something fail?” An operating model asks, “How do we prevent failure, understand impact, and continuously improve?” Reliability moves from a reactive support function to a core engineering discipline. 

The Back-of-the-House Data Reliability Model

The Back-of-the-House Data Reliability Model: Prevention → Detection → Diagnosis → Correction → Learning.

I call it “Back-of-the-House” because reliability, like operational excellence in hospitality or manufacturing, happens behind the scenes—but determines everything the customer ultimately experiences. When the back-of-the-house is disciplined, the front-of-the-house runs smoothly. When it is not, trust erodes quickly. 

The Back-of-the-House Data Reliability Model organizes reliability into five integrated layers. It is intentionally practical: something that can be adopted incrementally without redesigning an entire platform. 

Each layer addresses a distinct failure mode in modern data ecosystems. Together, they form a closed-loop reliability system that reduces uncertainty, accelerates response, and builds institutional knowledge over time. 

Here’s how I describe the five layers when I’m speaking to executives: 

Prevention: reliability starts before failure 

Most organizations begin with alerting. Prevention asks a different question: What does normal look like? By learning baseline patterns for freshness, volume, and distribution, teams can detect weak signals early—before issues propagate downstream. This is where AI belongs in reliability: adaptive baselining that reduces false positives and surfaces subtle drift. 
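The "learned baseline" idea can be sketched in a few lines. This is a deliberately simplified illustration, not any platform's actual method: a plain z-score against recent history stands in for the adaptive, seasonality-aware baselining described above.

```python
from statistics import mean, stdev

def is_anomalous(history, today, z_threshold=3.0):
    """Flag today's value if it deviates from the learned baseline.

    history: recent daily row counts for a table (needs >= 2 samples).
    Production systems also model seasonality and trend; this sketch
    uses a simple z-score to show the principle.
    """
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu
    return abs(today - mu) / sigma > z_threshold

# A quiet week of loads, then a partial load arrives as a weak signal.
counts = [98_500, 101_200, 99_800, 100_400, 100_900, 99_300, 100_100]
print(is_anomalous(counts, 100_600))  # within the learned baseline
print(is_anomalous(counts, 62_000))   # volume drop: the job "succeeded"
```

Note that no static threshold was configured: "normal" is derived from the data's own history, which is what reduces false positives as pipelines evolve.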

Detection: move beyond “did the job run?”

Detection must focus on behavioral dimensions that matter to consumers: timeliness, completeness, schema integrity, and distribution changes. These are the issues that break trust without breaking pipelines. Detecting them early changes the entire incident lifecycle. 
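A behavioral check evaluates the data itself, not the job status. The sketch below is hypothetical: the snapshot and expectation fields are invented for illustration and do not reflect any particular platform's schema.

```python
from datetime import datetime, timedelta, timezone

def behavioral_checks(snapshot, expectations):
    """Evaluate consumer-facing dimensions for one table snapshot.

    Returns a list of human-readable issues covering timeliness,
    completeness, and schema integrity. Field names are illustrative.
    """
    issues = []
    lag = datetime.now(timezone.utc) - snapshot["last_loaded_at"]
    if lag > expectations["max_lag"]:
        issues.append("stale: data is late for downstream consumers")
    if snapshot["row_count"] < expectations["min_rows"]:
        issues.append("incomplete: volume below expected floor")
    if set(snapshot["columns"]) != set(expectations["columns"]):
        issues.append("schema drift: columns changed upstream")
    return issues
```

Every one of these conditions can be true while the pipeline itself reports success, which is exactly the gap job-level monitoring leaves open.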

Diagnosis: lineage turns uncertainty into clarity 

When an anomaly appears, speed matters. Lineage enables teams to understand not just what changed, but what is affected. Instead of asking which dashboards or models might be impacted, teams can see the dependency path immediately and prioritize response accordingly. This is the difference between reactive scrambling and controlled triage. 
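Lineage-driven impact analysis is, at its core, a graph traversal. The table names below are invented examples; in practice the dependency graph would be extracted from a catalog or observability platform rather than hand-written.

```python
from collections import deque

# Toy lineage graph: table -> direct downstream consumers (names invented).
LINEAGE = {
    "raw.bookings": ["staging.bookings_clean"],
    "staging.bookings_clean": ["marts.daily_revenue", "ml.demand_features"],
    "marts.daily_revenue": ["dash.exec_revenue"],
    "ml.demand_features": ["model.demand_forecast"],
}

def impacted_assets(root):
    """Breadth-first walk listing everything downstream of an anomaly."""
    seen, queue = set(), deque([root])
    while queue:
        node = queue.popleft()
        for child in LINEAGE.get(node, []):
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return sorted(seen)

print(impacted_assets("raw.bookings"))
```

An anomaly at the ingestion layer immediately resolves to the exact dashboards and models at risk, which is what turns "which reports might be wrong?" into a prioritized response list.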

Correction: prioritize remediation using impact 

Not every issue deserves the same response. Impact-aware remediation helps teams focus on what is truly business-critical. Guided workflows and standardized playbooks reduce chaos while preserving human judgment where it matters most. 
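Impact-aware prioritization can be as simple as a scoring function over asset metadata. The tiers and weights below are illustrative assumptions, not a prescribed formula; real playbooks set these with business stakeholders.

```python
def triage_priority(incident):
    """Score an incident by business impact rather than alert volume.

    Tier weights and multipliers are illustrative assumptions.
    """
    tier_weight = {"tier-0": 100, "tier-1": 10, "tier-2": 1}
    score = tier_weight[incident["tier"]]
    score += 5 * incident["downstream_consumers"]
    if incident["feeds_automated_decisions"]:
        score *= 2  # automation amplifies bad inputs, so act sooner
    return score

incidents = [
    {"name": "internal QA extract", "tier": "tier-2",
     "downstream_consumers": 1, "feeds_automated_decisions": False},
    {"name": "exec revenue dashboard", "tier": "tier-0",
     "downstream_consumers": 3, "feeds_automated_decisions": False},
]
incidents.sort(key=triage_priority, reverse=True)
```

The point is not the arithmetic; it is that triage order is decided by an explicit, agreed-upon policy instead of by whichever alert fired loudest.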

Learning: reliability improves over time 

Reliability should get quieter as it matures. Every resolved issue feeds back into detection logic, refining sensitivity and reducing noise. Over time, reliability becomes a discipline embedded into daily operations—not something activated only during crises. 

Individually, each layer strengthens a different point in the lifecycle. Integrated together, they transform reliability from a collection of controls into an adaptive operating system for enterprise data. 

How This Advances The Field: From Observability To Reliability Engineering 

Data observability has been an important step forward, providing much-needed visibility into the health and behavior of modern data systems. But visibility alone does not guarantee reliability. The next evolution is embedding observability into a predictive reliability operating model. As AI adoption accelerates, data quality issues no longer just affect downstream dashboards. They affect automated decisions, forecasts, and customer outcomes. 

AI systems do not question their inputs. They amplify them. That reality raises the bar for reliability. Organizations that treat reliability as an operating model—rather than an after-the-fact check—are better positioned to scale AI with confidence, control risk, and maintain executive trust. 

This evolution requires leadership alignment. Reliability cannot live only within engineering teams. It must be visible at the executive level, tied to decision systems, and measured as rigorously as financial or operational performance. 

This is why the framework has resonated so strongly when shared in industry forums. Several peers have told me they adopted elements of the model after hearing it discussed, because it wasn't theoretical; it mapped directly to the problems they were trying to solve.

What Leaders Can Do Now

If you’re a data leader reading this and thinking, “This feels painfully familiar,” here’s where I’d start (without boiling the ocean): 

First, identify your “decision systems.” Not every dataset matters equally. Define the dashboards, metrics, and AI use cases that leadership depends on. Those are your reliability tier-0 assets. 

Next, implement behavioral detection on those assets: freshness, volume, and schema drift. This is where modern observability platforms help operationalize detection at scale, including adaptive thresholding. 

Then, add lineage for impact visibility. Your goal isn't just detecting anomalies; it's being able to answer, in minutes, "What breaks if I don't act?"

Finally, formalize the operating rhythm: who triages, how incidents are classified, how communication happens, and how learnings are fed back. Reliability isn’t a dashboard. It’s a management system. 

Closing Thoughts 

The most common misconception in the industry is that data reliability is a tooling problem. 

When data incidents occur, the immediate response is often to ask: Do we need better alerts? A different platform? More checks? Those are reasonable questions, but they are focused on the surface of the issue. 

The deeper challenge is not the absence of tools. It is the absence of structure. 

Reliability breaks down when there is no clear definition of what matters most, no shared understanding of impact, and no disciplined feedback loop that improves the system over time. Tools can surface signals. They cannot define accountability or enforce operational rigor. 

In that sense, many organizations are fighting the wrong battle. They are optimizing for visibility when the real requirement is operational design. 

The organizations that scale AI and advanced analytics successfully are not distinguished by how many alerts they generate. They are distinguished by how deliberately they engineer reliability into how decisions are supported, prioritized, and improved. 

Reliability is not a feature to be added. It is a system to be designed.

about the author

Gaurav Rastogi

Senior Director of Data Analytics at Hertz

Gaurav is the Senior Director of Data Analytics at Hertz, leading enterprise data platforms, analytics operations, and AI enablement across global business functions.

