Jim Barker
Thought leadership
March 25, 2026

The House of Data Series: People & Leadership

24 min read

This paper focuses on the organizational and human side of data programs — roles, culture, leadership accountability, and what it takes to sustain a data program over time. It does not cover technical architecture, tooling, or specific domain practices — those are each addressed in their respective whitepapers across the series.


House of Data Series

Every strong data program is built like a house. Data Architecture forms the foundation — the platforms, pipelines, and operating model that everything else depends on. Seven domain pillars rise from that foundation, each one essential to a complete data program: Data Quality, Privacy, Data Security, DataOps, Compliance, Data Enablement, and Data Consumption. Data Literacy runs across all seven as a connecting beam, ensuring people at every level can read, interpret, and act on data. At the top, People & Leadership sets the direction, accountability, and culture that holds the whole structure together.

This series of whitepapers covers each component of the House of Data in depth. Each paper was written by a practitioner with direct experience in that domain. Together, they form a practical guide to building data programs that earn — and keep — trust.


This paper covers People & Leadership — the organizational and human dimension of data programs: how firms structure accountability, build councils, define roles, and create the cultural conditions for data to succeed.

People and leadership

People & Leadership is the pediment of the House of Data — it sits at the top because without the right leadership structures, sponsorship, and organizational model, every pillar below it is at risk. Data programs succeed or fail on the quality of their people decisions: who is accountable, how they're empowered, how they're organized, and whether leadership treats data as a strategic asset or an IT cost center.

People and data

People at all levels of an organization are critical to success. Historically, firms built a central data and analytics team, housed either in IT or in the business. That strategy is becoming rare: many firms are finding that to become more productive they must distribute aspects of data work closer to where the focus is. This paper covers a set of topics that detail the challenges and benefits of a people-engagement business model for AI and analytics, supported by strong leadership.

"Empowered people are members of 'The Data Generation.'"

Tom Redman, The Data Doc. Source: Data and Human Empowerment, DGIQ December 2023

Core areas of consideration

The following sections take the concepts above and offer practical ideas for making them work.

Framework: pitch, vision, strategy, and roadmap

Many firms spend a great deal of time and money writing a "Data Charter" or figuring out the process map for how they are going to work. While that is a noble quest, we recommend starting with some actionable areas and then growing this into an OpsModel. This paper is intended to provide that productive idea — one that offers both perspective and flexibility to help firms move forward.

It is recommended that firms start by building out a Pitch and Vision of where they are headed, then document a data roadmap, and sell these ideas to senior leadership to include in the corporate strategy. This section delves into these four areas.

Pitch

The Pitch, or elevator pitch, is a 5-10 word slogan used to motivate, educate, and propel the program forward. It should be simple enough that anyone can repeat it, or at least recognize what it is about when they hear it. One firm came up with: DATA — Do Analysis and Take Action. Another focused on becoming data driven, and another on driving down costs. Pick something broad enough that everyone will respect it, yet motivating enough to move people forward.

Ask: Come up with a slogan or pitch on what is happening in the AI, data analytics, or data governance world.

Vision

The vision is a fairly short, compact description of where data is going in your organization over the next two years. While corporate strategies are often five years in length, data moves too fast for a five-year plan. Firms need to establish a two-year set of objectives and build out a roadmap that tells the story of how they are going to get there. The ability to articulate your vision for data is critical to bringing people together — with their time, talent, and treasure — to make a difference. If you can't articulate a vision, how do you know where you are going?

Roadmap

Most people working with data interact with software companies that publish a roadmap, so why not maintain one for your internal data practice? The idea is a graphical representation that tells the story of where you are, what your key objective areas are (from the vision), and which projects will help achieve that vision over the next two years. This picture should be shared broadly, reviewed monthly, and used to guide activities. Yes, it will change as priorities shift, but strong alignment between the Pitch, the Vision, and the Roadmap makes a huge difference. Aligning people, pitch, vision, and roadmap is how firms make the progress they intend.

Strategy

Firms often pay top-dollar to establish a grandiose "Data Strategy." We will leave those topics for another day. This idea of strategy is more direct:

  • Add to your corporate strategy one or two bullets that tie data to all other parts of your business.
  • In your annual report, have a bullet that calls out how your firm is going to "Use Data to be Better."

This doesn't need to be some big grandiose statement. But by having a Pitch, a Vision, and a Roadmap, it is vastly easier for senior leadership to look to data to make a difference for the business.

Data councils

One of the areas that most firms focus on is a data governance council. In this paper we take a different approach. We cover three items: (1) a recommendation for some levels of councils; (2) establishment of a Community of Practice; and (3) why councils don't work and how you can make them better.

Three+ councils

The term "data governance council" is looked at differently by almost every organization. Many have councils in place and fail to see the benefits — they have varying levels of participation and turn it into something that is less than constructive. Here is a novel approach: the Three+ model. Build out data councils at three different levels.

Data Governance Council (tactical) — Rather than holding yet another meeting of mid-managers with lukewarm beliefs in data stewardship, data quality, or data in general, establish a data governance council that includes your key stewards across business functions. In every organization there are data stewards who make a difference. These folks may not be the "chosen few," but they are massively important to your organization. They know: (1) the data; (2) how we got here; (3) who to trust — and who not to — across functions, subject areas, and business units; (4) how to get things done; and (5) where the bodies are buried. They also have a keen understanding of the hidden data factories inside your organization. Great data governance councils bring these people together weekly to share experiences, help each other, and find ways to progress forward. It is a forward-motion type of group, and these councils make a huge difference.

Data Steering Council (strategic) — This is a group of mid-managers who know what the data stewards are doing, are aware of the senior leadership direction, and can help steer data resources in the right way. These folks tend to get together monthly, compare notes, review the status of projects, and guide the data governance council on direction while also advancing data efforts with senior leaders.

Executive Data Council — This is a tougher one. At its best, it makes senior leaders aware of data, draws out their direction and insights, and builds support for data efforts. The challenge is two-fold: first, these leaders tend to be uninterested in data, or in yet another quarterly meeting, and lack the time to participate. Second, they tend to talk about the things they care about, and these meetings drift when the full audience turns up.

A way to handle this is to avoid creating another council and instead plug into loosely aligned councils that already exist. Many firms find that doing this with a "Digital Transformation," "AI Council," or "Business Transformation Council" is much more effective. Here is another recommendation: quarterly, write up a state-of-data report. Articulate where data is, where it is going, and share a couple of standard reports. Get on the agenda for one of those other councils, align data needs to that council's charter, and take the chance to address one to three points, with the report shared outside the meeting. This gets executive ears and also increases efficiency, which those leaders will appreciate. The goal of the Executive Data Council idea is to share and gain support for the day you need it.

Tiger teams (the "plus") — The plus in this Three+ model is tiger teams: small, temporary groups that work cross-functionally to drive out a key need. While the formal councils will get a cadence and a rhythm, tiger teams make the short-term difference. They spin up, have a clear charter, complete the work, and end. They help address strategic needs and are short-term enough that they stay focused from start to finish.

In short, getting a Data Governance Council running and supported by Steering and Executive influence will help progress data forward. Using tiger teams to address specific needs will be the game-changer.

Community of practice

While the councils help get structure and move things forward, getting the masses of people consuming data connected will make a huge difference. One way to do this is with a Community of Practice. Document who the data consumers are and organize a quarterly brown-bag type meeting with a highly productive agenda. A strong agenda for these sessions looks like:

  1. Introduction — Welcome to the Data Community of Practice.
  2. Software heads-up — Any announcements about changes in software and capabilities that exist internally. Keep it brief.
  3. Business topic — Cover one deep-dive data business topic. Make it helpful to the broad community. Share how something can be done, or a business problem and solution. Make it something people will think: "Should I try that?"
  4. Technical topic with business focus — Cover one technical item, a how-to or key technical detail most will benefit from. Tie this back to business value.
  5. Brown-bag Q&A — An open question-and-answer period to understand challenges and help move forward. Live Q&A can be helpful. You may need to seed the first session with a question or two to get things moving.
  6. Summary — Wrap up and always ask for new topics and feedback.

If you do this right, attendance will be strong, and you will see the number of people participating grow over time. It is meant to help bring people together. Data is at its best when we all work together.

Personas

Software companies define personas to establish who they sell to, who uses their products, and who they should message to. Running a data program calls for a different set of persona classes, and these personas can be used to build out highly effective task assignments. These persona groups should be built around four areas and used to define who does what in a RACI; an illustrative sketch follows the list below:

  • Creators — The wide range of people performing tasks for your organization across functional, geographical, and business unit boundaries. They create the data that is used to determine how to operate your business. Without data creators, no data exists and your business fails to function. This group is made up of SalesOps, Finance, Logistics, Manufacturing, and every customer, prospect, or partner who executes transactions in your internal or external systems.
  • Builders — The group of people who build data pipelines, analytics, and AI capabilities across your organization. This group is largely data engineers, scientists, and stewards. This group builds data capabilities for everyone to leverage.
  • Support — The group of leaders, auditors, IT, and support staff who set the direction, provide capabilities, and make sure things run smoothly and within the parameters of your organization's policies and standards. This group includes business leaders, chief data officers, auditors, security and privacy, and GRC staff.
  • Consumers — The wide range of people who consume data for their transactions as staff, support, customers, suppliers, or audit functions.
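As a starting point for turning these persona groups into task assignments, here is a minimal sketch of a RACI captured as plain data so it can be reviewed, versioned, and reported on. It is illustrative only: the activities, assignments, and the choice of Python are assumptions, not a prescribed standard.

```python
# Illustrative sketch: a RACI captured as plain data.
# Activities, personas, and assignments are examples only.

RACI = {
    # activity: {persona: "R" (Responsible), "A" (Accountable),
    #            "C" (Consulted), or "I" (Informed)}
    "create_transactional_data":   {"creators": "R", "builders": "I", "support": "A", "consumers": "I"},
    "build_data_pipeline":         {"creators": "C", "builders": "R", "support": "A", "consumers": "I"},
    "approve_data_access_request": {"creators": "I", "builders": "C", "support": "A", "consumers": "R"},
    "resolve_data_quality_issue":  {"creators": "C", "builders": "R", "support": "A", "consumers": "I"},
}

def assignments_for(persona: str) -> dict[str, str]:
    """Return every activity this persona touches and its RACI role."""
    return {activity: roles[persona]
            for activity, roles in RACI.items()
            if persona in roles}

if __name__ == "__main__":
    for activity, role in assignments_for("builders").items():
        print(f"{activity}: {role}")
```

Keeping the RACI as reviewable data like this makes it easy to ask, per persona, "what am I on the hook for?" — which is the whole point of the exercise.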

Foundational topics

No more heroes

Many software companies like to promote the idea that their software or capabilities create data heroes. That notion should be avoided. Building individuals with strong capabilities is a good thing, but creating heroes can be toxic. We should build collaborative groups of individuals who work together as strong teams, or expand to a community of practice, for rapid success and long-term benefit.

Shadow IT

Some firms that struggle to get what they need out of their IT groups start building their own capabilities. These groups often invest in their own platform licenses, software, and training programs, building solutions with data privacy, quality, and operational staff. This, too, should be avoided. Find a way to collaborate across business and IT instead. Keep these points of view:

  1. Firms find answers based on how well they work together.
  2. IT has vast experience and capabilities that can be shared across the organization, in relation to tool selection, testing, maintenance, and lower total cost of ownership.
  3. IT wants to help.
  4. Business teams have a great deal of work to do and are always trying to find ways to make things easier — IT has to be part of that equation.
  5. Collaboration is the idea of using key staff members in their area of expertise to help others build out their capabilities with expert guidance.

Shadow IT often unravels when audit findings, data privacy violations, data security breaches, or other hidden costs are exposed. Stronger outcomes occur when those difficult topics are worked collaboratively between business and IT experts in their domain of experience. Business teams who include (not exclude) subject matter experts, and IT staff who engage and show enthusiasm, drive out the most positive outcomes.

Data owners vs. data ownership

Who owns data? Over time, many different models have emerged. The following ideas delve into the concept of data ownership.

  • Data Product Owner — Plays a key role owning all aspects of a data product. They often bridge the needs of functional business stakeholders and users, data analysts, data engineers, and data scientists. They are involved in promoting the existence of the capability after it is built, and in building the case and assisting with requirements as data assets are built, maintained, and enhanced along the lines of business needs.
  • Functional Data Expert — Brings together the roles of functional data analyst, functional data owner, and business subject matter expert (SME). This person is responsible for ensuring that data meets the needs of a specific business function (like Finance, HR, Logistics, Sales, or Production). They play a role in making sure data is accurate, meaningful, and actionable. Their core responsibilities are driving out fit for purpose, proper maintenance, and appropriate use. They often work across business operations, data governance, and more technical teams translating what data means for the business.
  • Data Owners — Often viewed as the person responsible for data, the Data Owner should, at best, verify that the data is accurate, secure, compliant, and used properly. They are responsible for who has access to the data, how it is classified for use in line with privacy regulations, and whether it is of proper quality. This person tends to be a fairly senior executive and is widely regarded as the data owner of critical data.
  • Data Ownership — The more granular topic of how data is owned. Rather than tying to one owner, it is the idea of what group of individuals — either named or by team — takes ownership of specific pieces of data. They build out processes and provide oversight for data quality, data privacy, data security, and the knowledge assets that describe that data.

The use of these terms varies across businesses, industries, and situations. Some firms hold the traditional belief that there should be one owner for each piece of data. That view has faded over time. Material data, for example, cannot have a single data owner — attributes relate to different functional groups such as Manufacturing, Distribution, Logistics, Marketing, or Sales. Because of this, it is common for businesses to focus on which group of individuals has data ownership, perhaps at the object or attribute level. Their leadership provides the executive view (the Data Owner), but knowing who the Data Steward, SME, or Engineering Expert is provides greater value. The importance of determining data ownership or data owners lies entirely in how those titles are used inside an OpsModel or RACI: that is "how you work."
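To illustrate attribute-level ownership, the sketch below records which group and steward own each attribute of a data object, alongside the executive Data Owner view. The object, attribute, group, and steward names are assumptions made up for the example.

```python
# Illustrative sketch: attribute-level data ownership for a "material" object.
# Object, attribute, group, and steward names are assumptions for the example.

OWNERSHIP = {
    "material": {
        "executive_owner": "VP, Supply Chain",  # the Data Owner (executive view)
        "attributes": {
            "base_unit_of_measure": {"owning_group": "Manufacturing", "steward": "J. Doe"},
            "net_weight":           {"owning_group": "Logistics",     "steward": "A. Smith"},
            "list_price":           {"owning_group": "Sales",         "steward": "R. Lee"},
        },
    },
}

def who_owns(obj: str, attribute: str) -> dict:
    """Look up the owning group and steward for a single attribute."""
    return OWNERSHIP[obj]["attributes"][attribute]

if __name__ == "__main__":
    print(who_owns("material", "net_weight"))
    # {'owning_group': 'Logistics', 'steward': 'A. Smith'}
```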

Centralization and decentralization

Centralization is the establishment of a central governing body, framework, and processes to define, enforce, and oversee data policies, standards, quality, access, and compliance across the organization. It is viewed as the traditional approach to data governance: in the past, larger central data teams took on this role, but it is now one of the models firms rarely employ on its own for data governance and data management.

Decentralization is often thought of as distributed data governance, where each business function, business unit, department, or domain manages its own data. This can or should include addressing topics such as: (1) Data Quality; (2) Data Privacy; (3) Data Access; (4) Business Definitions; and (5) Compliance. Rather than having one central group, this is spread across user groups.

Federation and Data Mesh — Federation is another term for decentralized data governance. A more recent approach that pushed federation further is Data Mesh. Data Mesh is often viewed as a combined architectural and organizational approach: data is managed as a product, with domain teams owning it from creation to consumption. It was positioned as the be-all and end-all for deciding between centralization and decentralization, and it has opened up plenty of debate.

The goal with this entire debate about the roles and scale of centralization is to build out an OpsModel that works for you. How are you going to operate relative to data? There is no one-size-fits-all approach, but the best organizations find ways that work for them. Often this is a combination of some sort of central team for organization and progress that aligns the organization and distributes work to the areas that make the most sense. Firms that find a way to get people to work together (alignment), have common understandings, and progress to meet the needs of their organization with an ever-increasing pace (momentum) will be successful.

OpsModel

Bringing all of this together into a model for how you operate is the payoff of these concepts and needs. This is an OpsModel — think of it as "how you work." At its core it is largely two artifacts:

  1. RACI — By activity, the steps completed by each persona.
  2. OpsModel — The overall process that brings people together, encompassing:
    • New requests
    • Data quality needs
    • Data privacy requests
    • Data security grants and rescissions
    • Data enablement — setting staff up for success with data
    • Data consumption — the using of data to drive the business forward
    • Data oversight — verifying that data is being used within organizational standards and is not raising any new concerns or requests

The bottom line of the OpsModel is to understand how things are done, and share that across the organization to make things work as efficiently as possible.
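One lightweight way to make the OpsModel explicit is to write down, per request type, which group handles it and the turnaround target. The sketch below is an assumed example of such a routing table — the request types, owning groups, and service levels will differ by organization.

```python
# Illustrative sketch: routing incoming data requests to the group named in the
# OpsModel. Request types, owning groups, and turnaround targets are assumptions.

from dataclasses import dataclass

@dataclass
class Route:
    owning_group: str
    turnaround_days: int

OPS_MODEL_ROUTES = {
    "new_request":         Route("Data Steering Council", 10),
    "data_quality":        Route("Data Governance Council", 5),
    "privacy_request":     Route("Privacy Office", 3),
    "security_grant":      Route("Data Security", 2),
    "security_rescission": Route("Data Security", 1),
    "enablement":          Route("Data Enablement Team", 7),
    "oversight_finding":   Route("Data Governance Council", 5),
}

def route(request_type: str) -> Route:
    """Return who handles a request type and the turnaround target."""
    return OPS_MODEL_ROUTES[request_type]

if __name__ == "__main__":
    r = route("data_quality")
    print(f"{r.owning_group} within {r.turnaround_days} business days")
```

Publishing even a simple table like this tells everyone where a request goes and when to expect an answer, which is most of what "how you work" means in practice.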

The role of people in AI trust

People benefit from AI's ability to drive business processes forward — eliminating repetitive tasks or lowering the lift of other jobs. Yes, some people will worry about losing their job to a machine, but most see the efficiencies within those roles. To that end, data collection, data creation, and data consumption are some of the ways people work together to leverage data while still playing a role in AI Trust.

Stewards help enable staff to be involved in the administration and creation of AI processes. People need to be involved in reviewing and overseeing how data is administered. By reviewing how data is being used, how it should be used, and what data AI should be allowed to manipulate, the level of AI Trust will only grow. The key cogs in any organization are the users of data — creating, correcting, and stewarding — who make it work so a firm can build AI Trust.

How Bigeye supports people and leadership

The key to the people side of data is understanding how important people are, how things operate, and where to get help. While Bigeye plays less of a direct role in establishing people structures, it does one main thing: it reduces the lift for the many different tasks people need to complete (a generic sketch of one such check follows the list below), including:

  1. Showing where data comes from, the provenance of the data — its origination sources and how it is transformed for value.
  2. Providing a mechanism for impact analysis: if you change data in one structure, what else will be changed.
  3. Creating a set of alerts when pipelines fail or do not provide the volume of data expected for a transaction, or the data is not delivered when expected.
  4. Indicating when data quality thresholds are not met.
  5. Reporting out the number and magnitude of data issues by data quality dimensions.
  6. Assisting in the management of issues related to data.
  7. Bringing people together for managing and improving data pipelines, data quality, and understanding the flow of data.
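Purely as a hedged illustration — this is not Bigeye's API — the sketch below shows the concept behind a freshness and volume check like the ones described in items 3 and 4: compare a table's latest load time and row count against agreed thresholds and raise alerts when they are missed. The table name, thresholds, and in-memory SQLite database are assumptions for the example.

```python
# Illustrative sketch (not Bigeye's API): the idea behind a freshness and volume
# check. The table name, thresholds, and in-memory database are assumptions.

import sqlite3
from datetime import datetime, timedelta

MAX_STALENESS = timedelta(hours=24)   # data must have landed within the last day
MIN_ROWS = 10_000                     # expected minimum volume

def check_orders(conn: sqlite3.Connection) -> list[str]:
    """Return alert messages when the orders table misses freshness or volume targets."""
    alerts = []
    last_loaded, row_count = conn.execute(
        "SELECT MAX(loaded_at), COUNT(*) FROM orders"
    ).fetchone()
    if last_loaded is None or datetime.utcnow() - datetime.fromisoformat(last_loaded) > MAX_STALENESS:
        alerts.append("orders: freshness threshold missed")
    if row_count < MIN_ROWS:
        alerts.append(f"orders: volume below expectation ({row_count} rows)")
    return alerts

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER, loaded_at TEXT)")
    conn.execute("INSERT INTO orders VALUES (1, ?)", (datetime.utcnow().isoformat(),))
    print(check_orders(conn))   # volume alert fires: only one row loaded
```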

Summary

People play the most vital role in working with data and driving business benefit. This paper covers common misconceptions, highlights how to organize councils to support your data program, and describes the personas involved in making data work. Additionally, the more automation and AI are used to run business processes, the more important people become in ensuring the right data is being used in the right way.

Explore the Series

Every great data program is built from the ground up.

The House of Data breaks down the ten components of a mature, trustworthy data organization. Click any section to explore that paper.

Data Leadership, Data Literacy, Data Quality, Privacy, Data Security, DataOps, Compliance, Data Enablement, Data Consumption, and Data Architecture.

References

  • Tom Redman. Data and Human Empowerment. DGIQ, December 2023.
  • David Plotkin. Data Stewardship: An Actionable Guide to Effective Data Management and Data Governance, 2nd Edition.
  • Dr. James Barker. Data Success with People. DGPO, August 2024.
  • Jimm Johnson, HireRight. Best Practices in Data Stewardship. 2022.
  • Lynn Marsh et al., Principal Financial Group. Keeping Stakeholders Engaged.
  • Gwen Thomas, World Bank. Emerging Data Specialties.

Why is People & Leadership at the top of the House of Data rather than the foundation?

Because leadership doesn't execute the technical work — it creates the conditions for everything below it to succeed or fail. Without sponsorship, clear accountability, and an organizational model that empowers the right people, every pillar below is at risk of being underfunded, deprioritized, or undermined by politics. The pediment of a temple sits at the top and is load-bearing in a different way than the columns: it defines the structure's purpose and holds the whole thing together.

What is an OpsModel and why does the paper recommend it over a data strategy?

An OpsModel describes how data work actually gets done: who handles which type of request, what the steps are, how issues get escalated. It's operational rather than aspirational. The paper recommends starting with an OpsModel over a formal data strategy because it creates immediate, practical alignment. A strategy describes where you want to go. An OpsModel describes how you work — and how you work determines whether the strategy ever gets executed.

How does the People & Leadership paper address the concern that AI will eliminate data jobs?

By reframing the role of people in AI-enabled programs. More automation means more need for human oversight, not less. Someone has to define what data AI systems are allowed to use, review outputs for quality and bias, and flag when something looks wrong. The stewardship and governance functions become more important as AI takes over more execution. The paper's position is that data programs need people who are engaged, empowered, and trained — not fewer people, but people with different emphasis in their skills.

about the author

Jim Barker

Director of Professional Services

Jim Barker is a lifelong data practitioner, industry thought leader, and passionate advocate for treating data as a strategic asset. With more than four decades of experience spanning data quality, governance, warehousing, migration, and architecture, Jim brings a rare blend of hands-on expertise and executive perspective to the evolving data landscape.

Jim’s journey in data began at just 14 years old. Since then, he has held leadership roles across organizations including Honeywell, Informatica, Thomson Reuters, Winshuttle (Precisely), Alation, nCloud Integrators, and Wavicle, contributing to advancements in data governance, migration methodologies, and enterprise data strategies. His work has included building global data quality programs, developing scalable governance frameworks, and driving innovation recognized across the industry.

His research and writing focus on lean data management, governance strategies, and the intersection of AI, data quality, and enterprise value creation.

Now at Bigeye as Director of Professional Services, Jim is energized by the company’s vision for data observability and its role in shaping the future of trusted data. He continues to share his perspectives through writing and speaking, aiming to elevate the conversation around data, cut through industry noise, and help organizations do data the right way.

Outside of work, Jim enjoys coaching and spending time with his family, often on the basketball court or soccer field, where many of the same lessons about teamwork, discipline, and leadership apply.

As Jim puts it: “Data matters.”
