Engineering - September 6, 2022

7 ways to level up your dbt tests

Dbt tests are a great first step for organizations that want to improve data quality and reliability. But eventually, a second step becomes necessary. That's where data observability comes in.

Kyle Kirwan

When a company using dbt starts to experience data pipeline problems, they usually turn to dbt tests. That makes sense; it's the logical first step in mitigation.

Dbt tests work wonderfully for organizations that want to improve data quality and reliability. But after a time, a second step becomes necessary. Testing has an inherent limitation: tests only provide information about the areas you test. What about the areas you haven't tested, because you don't have the time or resources?

At a certain scale, organizations shouldn't just test their data pipelines; they need to fold in some data observability as well. In this blog post, we'll cover 7 ways to go deeper with dbt tests, plus the indicators that it's time to add a data observability component to your data pipeline.

1. Understand the different types of dbt tests

There are two types of dbt tests:

Data tests are specific queries that you run against a specific model. These live in the `tests` folder of your dbt project.

Schema tests are generic tests that can be configured and applied to multiple models. Out of the box, dbt ships with four schema tests you can use immediately: `unique`, `not_null`, `accepted_values`, and `relationships`. If you want to write your own custom schema tests, write them as macros prefixed with `test` and store them in the `macros` folder.

If you have one model that you want to validate in a specific way, you can simply write a data test in the `tests` folder.

If you find yourself copy-pasting that query and reconfiguring it for multiple models, consider converting it from a data test into a schema test, so that you write the test once and apply it to many models. Both styles are sketched below.
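For illustration, here is roughly what each type looks like. The `orders` model and `total` column are hypothetical names, and exact syntax varies by dbt version:

-- tests/assert_no_negative_totals.sql
-- Data test: dbt runs this query, and any returned rows count as failures.
select order_id, total
from {{ ref('orders') }}
where total < 0

-- macros/test_non_negative.sql
-- Custom schema test. Recent dbt versions define these with a {% test %}
-- block (older versions used a macro named with a test_ prefix).
{% test non_negative(model, column_name) %}
select *
from {{ model }}
where {{ column_name }} < 0
{% endtest %}

Once defined, the schema test can be applied to any column from a schema.yml file:

# models/schema.yml
version: 2
models:
  - name: orders
    columns:
      - name: total
        tests:
          - non_negative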

2. Use plugins and packages

Dbt has a rich ecosystem of packages that integrate with it, including:

  • OpenLineage - an open platform for the collection and analysis of data lineage. It tracks metadata about datasets, jobs, and runs, and can consume metadata produced by dbt runs.
  • dbt_expectations - a dbt package that lets you deploy Great Expectations-style tests in your data warehouse directly from dbt (see the sketch below this list).
  • Unit tests for macros - community patterns and packages for unit testing your dbt macros.
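As a sketch, adopting dbt_expectations means adding it to packages.yml, running `dbt deps`, and then referencing its tests in your schema files. The model and column names here are assumptions:

# packages.yml
packages:
  - package: calogica/dbt_expectations
    version: [">=0.8.0", "<0.9.0"]   # pin whichever release is current

# models/schema.yml
version: 2
models:
  - name: orders
    columns:
      - name: total
        tests:
          - dbt_expectations.expect_column_values_to_be_between:
              min_value: 0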

3. Schedule dbt tests for all dbt runs

Ideally, if you have a dbt job that runs daily, the associated dbt tests should run on the same schedule. To do this, configure your dbt job to run multiple commands: source freshness checks, `dbt run`, `dbt test`, and so on. If any step fails, you can get an email and a Slack notification.
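One minimal way to chain those commands in a scheduled job is a script that stops at the first failure, so the alert fires before bad data propagates downstream:

#!/bin/bash
# Scheduled dbt job: freshness check, then build, then test.
set -euo pipefail        # exit immediately if any step fails

dbt source freshness     # has new raw data actually landed?
dbt run                  # build the models
dbt test                 # validate the outputs

In dbt Cloud, the equivalent is listing these as separate job steps and enabling the job's email/Slack notifications on failure.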

4. Run dbt tests on a model change/new PR

Whenever a PR containing a model change is opened against your dbt repo, you should use dbt tests to verify that the change won't break your dbt models before merging the PR.

The best practice here is blue-green deployment for your data models. This means that your CI/CD tool should first run `dbt run` and `dbt test` against a staging environment: either a separate staging database, or a staging schema within the production environment. Then, if the tests pass, run `dbt run` against the production environment or the production schema.

To break this down:

Blue-green data model testing

  1. Run `dbt run` in staging (either a separate staging schema or a separate staging database)
  2. Run `dbt test` in staging
  3. If all the tests pass, then run `dbt run` in prod

You should make sure that your staging environment fully replicates production, so that the initial staging `dbt test` results are an accurate proxy for what will happen in production.
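In practice, the staging/production split is often expressed as two targets in profiles.yml, so the CI pipeline boils down to three commands. The project, target, and schema names below are assumptions:

# profiles.yml
my_project:
  target: staging
  outputs:
    staging:
      type: snowflake        # or your warehouse adapter
      schema: dbt_staging    # illustrative staging schema
      # ...credentials and remaining connection details
    prod:
      type: snowflake
      schema: analytics      # illustrative production schema
      # ...credentials and remaining connection details

dbt run --target staging && \
dbt test --target staging && \
dbt run --target prod    # only reached if the staging tests passed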

5. Run dbt tests on your source models

To catch potential problems as early as possible, you should always run `dbt test` against your source models first:

  1. Run dbt tests against the source
  2. Run the dbt job
  3. Test the resulting outputs

In other words:

dbt test --models source:[sources]

dbt run -m [models]

dbt test -m [models]
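For the first command to have anything to check, tests need to be declared on the sources themselves. A minimal sketch, with hypothetical source and column names:

# models/staging/src_shop.yml
version: 2
sources:
  - name: raw_shop
    tables:
      - name: orders
        columns:
          - name: order_id
            tests:
              - unique
              - not_null

With that in place, `dbt test --models source:raw_shop` runs just these source-level checks before anything downstream is built.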

6. Distribute dbt test results

To make sure that you are actually aware when something in your dbt transformations goes wrong, set up an interactive way to display and visualize your dbt test results.

There are a few tools, like re_data and elementary, that do this for you, or you can spin up something custom. The data team at Snapcommerce, for example, leveraged dbt artifacts and the query log to build reporting and alerting on dbt model/job runs and performance.
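If you go the custom route, note that dbt writes a run_results.json artifact into the target/ directory after every invocation. A minimal sketch of pulling failures out of it with jq:

# list every test that failed or errored in the last dbt invocation
jq -r '.results[] | select(.status == "fail" or .status == "error") | .unique_id' \
  target/run_results.json

Feeding that output into a Slack webhook or a dashboard is a common lightweight starting point.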

This may also be a good point to begin considering a more generic data observability tool like Bigeye.

7. When appropriate, add observability into your data pipeline

While testing and observability have the same ultimate goal – detecting problems – they go about it in two different ways.

Testing represents a human with knowledge of the data, expressing a condition that they know should be true. It is opinionated.

Observability, on the other hand, is unopinionated. It is purely asking, do we know the state of our data system at all times? Do we know how the state is changing? And will we know if the state changes in a way that is concerning?

A good analogy for observability is the dashboard of a vehicle. You can drive without it, but most people would be uncomfortable not knowing how fast they're going or whether the check-engine light is on. On the other hand, it doesn't make sense to write an explicit test for your speed or your engine light at every moment; you just want them continuously visible.

In conclusion: Optimize dbt testing sophistication

It's increasingly apparent that data engineers are going to some trouble to bridge the gap between dbt tests and a more traditional observability solution. Even so, this kind of hand-rolled setup at best gives you observability into dbt alone; you remain blind to the other parts of your data stack.

Dbt is super popular, and you should definitely leverage dbt testing. However, as you scale your tests, you may realize that the set of things you could write tests for is effectively infinite; there's only so much ground you can cover. That realization is a sign that it's time to catch the long tail of everything else that can go wrong, beyond your test suite. To catch the things you don't want to write tests for, and to take your analysis to a new level of sophistication, layer on an observability solution. Data observability platforms look at the final data at rest, after everything has been transformed.

