Leading Insights Blog

The Need for Adaptive Data Governance in the Frontier of Artificial Intelligence (AI) and Automation

By Maggie Arnold, Mark Linn, and Connor McCarthy

An adaptive data governance framework is necessary to keep pace with the rapid advancement of AI and automation tools. Agencies can mitigate the unique and unprecedented risks associated with these new technologies through a continuously evolving data governance framework, tailored to the entity, that ensures appropriate data stewardship, accountability, and privacy. Without such safeguards in place, emerging technologies can introduce significant risks alongside their benefits.

Background

In 2019, the Foundations for Evidence-Based Policymaking Act was signed into law, modernizing the Federal Government’s data management practices and including provisions such as the Open, Public, Electronic, and Necessary (OPEN) Government Data Act, which requires Federal agencies to appoint Chief Data Officers (CDOs) and establish comprehensive data inventories.1 The Act signaled the Government’s recognition that data must be deliberately managed across the Federal landscape. While the Foundations for Evidence-Based Policymaking Act established important requirements for maintaining the confidentiality, integrity, and availability of Federal data, it could not foresee the rate at which AI-based tools would be integrated into all aspects of technology, nor the rate at which agencies would adopt those tools.

In recent years, the use of data has rapidly expanded to improve processes, create efficiencies, and drive decision-making across the Federal space, with agencies such as the National Oceanic and Atmospheric Administration (NOAA), Department of Veterans Affairs (VA), and U.S. Patent and Trademark Office (USPTO) publicizing success stories.2 Meanwhile, other agencies like the Social Security Administration (SSA) are voicing intent to integrate AI-based tools to create process efficiencies that offset budgetary and staffing constraints.3

The Frontier of AI and Automation

With the private sector making automation and AI tools widely available, technology is evolving at a breakneck pace and demands an agile response. The Government has had to adapt and evolve its data governance policies as Federal agencies navigate challenges related to safety, security, personnel, and balancing regulation with innovation while adopting AI technologies. In October 2023, an Executive Order (EO) on the Safe, Secure, and Trustworthy Development and Use of AI4 was released, requiring agencies to begin implementing safeguards, expand transparency, advance innovation, and strengthen the governance of AI use within the Government. The EO required the Office of Management and Budget (OMB) to establish an interagency council to coordinate AI use by Federal agencies and to develop appropriate guidance; it also required agencies to designate a Chief AI Officer (CAIO) within 60 days of the order’s issuance, establish governance bodies, document and report an AI use case inventory, and report on AI use cases not subject to inventory. However, with the transition to the new presidential administration, President Trump rescinded the Biden EO on January 23, 2025, through a separate EO, Removing Barriers to American Leadership in Artificial Intelligence5, thereby nullifying the existing requirements.

With the new EO rescinding the original AI EO and requiring a review of any other existing guidance on Federal AI use, the responsible use of AI now falls to individual agencies, which must craft their own approaches until the Administration mandates further requirements, if any. Now, more than ever, agencies need to implement and maintain an adaptive data governance framework that acts as a bulwark against new and emerging risks and integrates the data governance function into AI governance programs. Adaptive data governance employs methods of continuous monitoring and improvement (e.g., feedback loops, Key Performance Indicators [KPIs], Key Risk Indicators [KRIs], and data drift monitoring) to ensure agencies can proactively and effectively respond to data-related risks and considerations. While specific AI guidance is still in flux within the new Administration, maintaining AI-related policies, procedures, and controls should remain a priority as part of establishing and monitoring a strong internal control environment under broader, existing guidance such as the Federal Information Security Modernization Act of 2014 (FISMA), the Government Accountability Office (GAO) Standards for Internal Control in the Federal Government, the Federal Managers’ Financial Integrity Act (FMFIA), and the Federal Financial Management Improvement Act (FFMIA).
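One of these monitoring methods can be made concrete with a small amount of code. The sketch below, written in Python, illustrates how data drift monitoring might be operationalized using a Population Stability Index (PSI) that compares current data against the data profile in place when a model was approved; the 0.2 alert threshold, the simulated data, and the escalation message are illustrative assumptions rather than requirements drawn from Federal guidance.

import numpy as np

def population_stability_index(baseline, current, bins=10):
    """Quantify how far a feature's current distribution has drifted from its baseline."""
    # Derive bin edges from the baseline distribution; quantiles handle skewed data.
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # capture values outside the baseline range

    baseline_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    current_pct = np.histogram(current, bins=edges)[0] / len(current)

    # Guard against log(0) for empty bins.
    baseline_pct = np.clip(baseline_pct, 1e-6, None)
    current_pct = np.clip(current_pct, 1e-6, None)

    return float(np.sum((current_pct - baseline_pct) * np.log(current_pct / baseline_pct)))

# Illustrative use: compare this reporting period's data against the approved baseline.
rng = np.random.default_rng(0)
baseline = rng.normal(50, 10, 10_000)  # data profile when the model was approved
current = rng.normal(55, 12, 10_000)   # data profile observed this reporting period
psi = population_stability_index(baseline, current)
if psi > 0.2:  # common rule-of-thumb threshold, used here as a hypothetical KRI
    print(f"Data drift detected (PSI={psi:.2f}); escalate to the data governance body.")

A check like this, run on an established schedule, gives KRIs a measurable basis and provides the governance body with an early signal that a model may need review.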

Why Does Data Governance Need to Be “Adaptive”?

At the core of new automation technologies and tools is, of course, data, which is needed to train and influence the decision-making capabilities of AI models and tools. The age-old mantra of “garbage in, garbage out” still rings true for data quality and data use. It is therefore important to consider the following questions: What evidence ensures these AI tools are using appropriate and accurate data in a consistent and correct manner? What level of confidence can be established that the results they yield are limited in their bias? What assurance confirms that the data used in AI models is secure and private? And what can be done to proactively stay ahead of changing technology when formal requirements cannot reasonably match the speed at which the technology changes? These questions show that a data governance framework cannot succeed in the current technology environment as a “set it and forget it” exercise; it must provide foundational structure for how agencies manage data while remaining flexible enough to actively address changes in the technology landscape.

How Can Agencies Establish and Maintain an Adaptive Governance Framework?

The Federal Data Strategy Playbook6 provides practices for fulfilling the requirements of the Evidence Act. While the playbook focuses on establishing an initial governance program, the overarching tactics highlighted below can help agencies build an adaptive environment.

Tactic: Collaboration

Quality Personnel: Data governance involves people, processes, and technology; therefore, it is imperative that any data governance team be composed of personnel who understand data governance principles and can help drive the process.

Stakeholder Involvement: Governance collaboration should extend beyond Subject Matter Experts (SMEs), Information Technology (IT) staff, internal control experts, Chief Financial Officers (CFOs), and dedicated data personnel. Ideally, it should also include personnel from operational and financial agency components, who can provide additional perspective on how data is used across the agency, as well as the designated CAIO and any support staff. Input from key stakeholders promotes the sharing of best practices and can aid in the development of standardized policies, procedures, and tools to mitigate risk.

Communication Tactics and Channels: Agencies should establish formal avenues of communication so that a forum is always available to discuss governance-related issues and risks. Forums may include live discussions as well as virtual workspaces that foster collaboration when dedicated meetings are not an option (e.g., SharePoint, Microsoft Teams).

Tactic: Accountability

Roles and Responsibilities: Agencies should formally establish roles and responsibilities to ensure decision-making and reporting objectives are met; otherwise, collaborative efforts may stall.

Designated Personnel: Within formal roles and responsibilities, designated personnel should be chosen in a timely manner to ensure governance objectives are met. While agencies are already required to have a CDO, secondary roles should be considered to make governance a collaborative effort and prevent siloing.

Tactic: Continuous Improvement

Oversight Body Participation: The data governance body, chaired by the CDO, should obtain input from end users and other stakeholder groups and report on the effectiveness of data governance strategies entity-wide.

Evaluation Intervals: Established review schedules ensure that agencies maintain a consistent monitoring approach and prevent lapses in review periods in which issues may go undetected.

Maturity Assessments: Within the review schedule, agencies should integrate routine maturity assessments to identify recurring issues in AI-related areas such as model bias; bias cannot be permanently remedied and therefore remains a consistent factor for consideration (a simple illustration of such a recurring bias check appears after these tactics). Maturity assessments should also evaluate whether controls within the data governance framework address the process for modifying and implementing changes to models that actively use, or are trained with, agency data. The output of maturity assessments should identify gaps within the governance structure and areas that require additional resources (e.g., funding, staffing, training).

Privacy and Security: Leveraging the results of existing privacy and security requirements under other laws, regulations, and standards (e.g., FISMA, National Institute of Standards and Technology [NIST] guidance) allows agencies to integrate monitoring of privacy and security considerations into the data governance framework.
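To make the recurring bias check referenced under Maturity Assessments concrete, the short Python sketch below computes one simple fairness measure, a demographic parity gap, that an agency could track from one assessment cycle to the next; the metric, group labels, and sample data are illustrative assumptions, not prescribed Federal measures.

from collections import defaultdict

def selection_rates(decisions):
    # `decisions` is an iterable of (group_label, favorable_outcome) pairs; the field
    # names and data source are hypothetical and would come from an agency's own AI use case.
    totals, favorable = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        favorable[group] += int(outcome)
    return {group: favorable[group] / totals[group] for group in totals}

def demographic_parity_gap(decisions):
    # Largest difference in favorable-outcome rates between any two groups (0 = parity).
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Illustrative maturity-assessment check: record the gap each review cycle and watch the trend.
sample = [("A", True), ("A", False), ("A", True), ("B", True), ("B", False), ("B", False)]
print(f"Demographic parity gap this cycle: {demographic_parity_gap(sample):.2f}")

Tracking a measure like this over successive assessment cycles does not eliminate bias, but it gives the governance body evidence of whether mitigation efforts are trending in the right direction.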

Looking Forward – What Might the Future of AI and Data Governance Look Like?

As the frontier of AI and automation continues to unfold, there have been predictions about its future impact on data governance as the complexity and capability of AI models escalate. Some predictions include stricter guidelines to ensure automation does not lead to a decline in data quality, more stringent compliance checks, methods to facilitate collaborative data sharing, and real-time data monitoring. Beyond compliance, questions of ethics and privacy remain, including how to ensure AI does not encroach on free will and how data can be leveraged to create misinformation. The most critical question, however, is whether data governance laws and regulations can keep up with the rapid pace of change. Until the answer becomes clear, the responsibility for creating more adaptive data governance frameworks resides with the users of AI and is integral to achieving future success and mitigating risk.

Connect with Us

This publication is for informational purposes only and does not constitute professional advice or services, or an endorsement of any kind.

Kearney is a Certified Public Accounting (CPA) firm focused on providing accounting and consulting services to the Federal Government. For more information about Kearney, please visit us at www.kearneyco.com or contact us at (703) 931-5600.

1 H.R. 4174, 115th Congress (2017-2018): An act to amend titles 5 and 44, United States Code, to require Federal evaluation activities, improve Federal data management, and for other purposes. Congress.gov, Library of Congress.
2 Federal AI Use Case Inventories. AI.gov.
3 SSA Chief Touts AI Use Amid Staffing, Budget Woes. MeriTalk.
4 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The White House.
5 Removing Barriers to American Leadership in Artificial Intelligence. The White House.
6 Federal Data Strategy Data Governance Playbook (fds-data-governance-playbook.pdf).
