AI in Government Is Growing Faster Than the Guardrails. What Should Agencies Do?

Artificial intelligence is moving quickly into the federal landscape as agencies are testing automation, exploring predictive analytics, and experimenting with generative tools that promise to streamline operations and support mission delivery. Senior leaders are standing up AI steering committees, and program offices are piloting tools meant to assist with policy development, citizen engagement, acquisition workflows, financial management, and workforce support.

The momentum is undeniable, but I believe we’re facing a difficult truth. The federal government is adopting AI faster than it is preparing for AI. In my view, this gap between adoption and readiness is one of the most significant modernization challenges the government faces right now.

AI offers tremendous potential for public service, but I don’t believe those benefits will be realized unless agencies strengthen their governance, workforce skills, data foundations, and internal frameworks for responsible use. Without that, the technology will outpace the guardrails.

The Adoption Readiness Gap

I’m hearing about promising AI pilots in many corners of the federal government. Some agencies are using AI to classify documents, triage inquiries, draft background summaries, or support logistics planning. Others are experimenting with digital assistants that help staff members navigate policies or perform routine internal tasks. What’s often missing, though, is the structure needed to support safe adoption.

Many agencies still don’t have full responsible-use policies or consistent guidance on when AI should or shouldn’t be used. Some don’t have formal approval pathways for evaluating new tools. Others lack risk assessment models that account for algorithmic behavior. Workforce training is sporadic and often limited to introductory demonstrations. In many cases, monitoring for drift, hallucinations, or accuracy degradation is not formally defined.
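Formal drift monitoring does not have to be elaborate to be defined. As a minimal sketch of what a scheduled accuracy check might look like (the labels, threshold, and function names here are hypothetical illustrations, not any agency's actual tooling), a recurring job can score model outputs against a small human-reviewed sample and raise an alert when accuracy degrades:

```python
# Minimal sketch: compare a classifier's labels against a human-reviewed
# "gold" sample on a schedule and flag accuracy degradation.
# All names and the 0.90 threshold are hypothetical placeholders.

ACCURACY_FLOOR = 0.90  # below this, escalate to a human reviewer

def accuracy_against_review(model_labels, human_labels):
    """Share of model labels that match the human-reviewed gold set."""
    matches = sum(m == h for m, h in zip(model_labels, human_labels))
    return matches / len(human_labels)

def check_for_drift(model_labels, human_labels, floor=ACCURACY_FLOOR):
    """Return (accuracy, alert) so a scheduler can log or escalate."""
    acc = accuracy_against_review(model_labels, human_labels)
    return acc, acc < floor

# Example: a weekly reviewed sample of five documents
model = ["benefits", "benefits", "tax", "tax", "records"]
human = ["benefits", "appeals",  "tax", "tax", "records"]
acc, alert = check_for_drift(model, human)
print(f"accuracy={acc:.2f} alert={alert}")  # accuracy=0.80 alert=True
```

Even a check this simple forces an agency to answer the governance questions the paragraph above raises: who maintains the reviewed sample, what threshold triggers escalation, and who responds when it fires.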

It feels like adoption is accelerating faster than the controls that would make adoption responsible and sustainable. The core question is whether agencies can scale these tools without compromising oversight.

It's worth noting that while OMB’s Memorandum M-25-21 provides a government-wide AI governance baseline, many agencies are still translating those requirements into enterprise policy. Similarly, while the Chief Data Officer Council offers a valuable structure for data governance, the consistent application of AI-specific model oversight, data-quality standards, and risk management practices remains a work in progress across federal agencies.

Legacy Data Weakens AI Outcomes

Every AI system depends on the quality of the data it consumes, and hygiene is an area where federal agencies face real challenges. In my experience, federal data environments are often fragmented, outdated, and inconsistent. Disparate systems hold separate slices of information, and they don’t sync well. Incomplete data sets are a fact of life in most federal agencies, and some of the data agencies house was never designed with modern modeling in mind. Privacy rules may also restrict the integration needed for comprehensive analysis, requiring data owners and privacy officers to partner more closely.

If AI training relies on these fragmented landscapes, the results are going to reflect the weaknesses embedded within them. This leads to reduced accuracy and inconsistent outputs. Without serious investment in data modernization, I believe AI will fall short of its promised value.
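A first step toward that investment can be as modest as profiling completeness before any training happens. The sketch below (field names and records are invented for illustration) computes the share of records with a usable value for each required field, making gaps visible up front:

```python
# Sketch: profile field-level completeness across fragmented records.
# The fields and sample records are hypothetical, not real agency data.

def completeness_profile(records, required_fields):
    """Share of records holding a non-empty value for each required field."""
    profile = {}
    for field in required_fields:
        filled = sum(1 for r in records if r.get(field) not in (None, ""))
        profile[field] = filled / len(records)
    return profile

records = [
    {"id": 1, "zip": "20001", "program": "housing"},
    {"id": 2, "zip": "",      "program": "housing"},
    {"id": 3, "zip": "22102", "program": None},
]
print(completeness_profile(records, ["zip", "program"]))
# {'zip': 0.6666666666666666, 'program': 0.6666666666666666}
```

A profile like this does not fix the data, but it tells modernization teams where to spend remediation effort before a model inherits the gaps.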

A bot crawling mountains of broken data is destined to step on a loose rock, and the human user is the one who will take the tumble. Stronger data foundations will lead directly to stronger and more reliable AI outputs.

Ensuring Consistent and Fair Outcomes

One of my biggest concerns is how easily AI can replicate the gaps and inconsistencies that already exist in federal data. Government programs serve a wide range of communities and regions, but the underlying data doesn’t always reflect that range evenly. Some groups are well represented by data while others have sparse or inconsistent historical records due to reporting differences, program designs, or variations in digital access.

If these variations flow directly into training data, AI systems are likely to produce uneven results. For example, a model might misclassify certain applicants for a public service simply because the underlying data is thin. Geographic areas, both rural and urban, with limited digital documentation might be deprioritized in automated analysis. And longstanding administrative patterns can become baked into algorithmic logic because AI treats historical data as the “truth,” when in reality historical data reflects how the government operated yesterday and not necessarily how it should be operating today.

This data reliability issue has practical consequences for mission execution. Agencies need structured assessments that examine how data was collected, who is represented, where gaps exist, and how automated systems might interact with those differences. The goal is to ensure that AI-supported processes produce outcomes that are accurate, consistent, and aligned with program intent.
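One structured assessment can be as simple as measuring each group's share of the records and flagging groups that fall below a review threshold. This sketch uses invented region labels and an arbitrary threshold purely for illustration:

```python
# Sketch: flag groups whose share of the records falls below a threshold,
# signaling where an automated system may behave unevenly.
# Region names and the threshold are hypothetical examples.
from collections import Counter

def representation_gaps(records, group_field, min_share=0.05):
    """Return {group: share} for groups under-represented below min_share."""
    counts = Counter(r[group_field] for r in records)
    total = sum(counts.values())
    return {g: n / total for g, n in counts.items() if n / total < min_share}

records = [{"region": "urban"}] * 8 + [{"region": "rural"},
                                       {"region": "tribal"}]
print(representation_gaps(records, "region", min_share=0.2))
# {'rural': 0.1, 'tribal': 0.1}
```

Flagged groups are not automatically a problem, but they mark exactly where human review of model behavior should concentrate before a tool goes into production.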

By understanding where historical inconsistencies exist, agencies can reduce the risk of uneven outcomes and make their automated systems far more reliable. Recognizing the limits of historical data helps prevent AI from carrying old patterns into new systems.

Workforce Preparedness Remains a Challenge

Most federal workers have not been trained in the fundamentals of AI literacy. Many do not understand how generative models create responses or how predictive systems classify information, and others don’t know how to review AI outputs or when human intervention is required. This uncertainty creates hesitation and sometimes fear. Employees worry about incorrect use or about inadvertently misinterpreting an AI action. Some also worry about how AI might affect job roles or expectations, which makes them less inclined to use new tools.

I believe AI preparedness needs to start with cultural awareness. Federal employees need more than just tool training. They need to understand how to verify outputs, question assumptions, identify risks, and apply oversight in a way that strengthens professional judgment. This will provide assurance that AI is an enabler, not a replacement for human talent.

When people feel confident about supervising AI systems, agencies can adopt the technology much more safely. Employee confidence will determine whether AI accelerates the mission or complicates it.

Policy and Compliance Ambiguity

AI touches many parts of the federal regulatory and oversight ecosystem. Privacy rules, civil rights protections, FISMA, FedRAMP, records management, procurement rules, and program-specific statutes all influence deployment. Many of these requirements were written well before modern AI tools existed. This creates an understandable caution.

Many in the federal space hesitate because they don’t want to violate a rule that doesn’t map neatly to AI, and some fear that innovation might actually create compliance risk. I believe agencies need clearer governmentwide guidance, but they also need internal frameworks that give employees confidence and clarity.

Clear internal rules will empower innovation while safeguarding compliance in environments where critical mistakes matter.

Procurement Strategies Need to Evolve

Legacy non-major acquisition processes were built for conventional software, hardware, and labor-based contracting. AI is different. It evolves at a rapid pace, needs constant retraining, and requires continuous monitoring, transparency, and testing. These realities don’t always map seamlessly onto standard contract structures for the procurement of goods and services. Both pre-solicitation planning and post-award administration for AI need to be approached differently than typical IT procurement.

From my perspective, agencies will need to create new acquisition strategies that account for the lifecycle of AI systems and set clear expectations for vendor transparency, model updates, performance measurement, and ongoing compliance with the constellation of federal regulations that have both a direct and tangential linkage to AI usage.

Successful application of AI capabilities will rely heavily on acquisition strategies that can match the fluidity of the technology’s lifecycle and oversight that can keep pace with its rapid evolution.

Public Trust Must Be Protected

When the federal government uses AI, the stakes are uniquely high. American citizens can’t opt out of interacting with government systems, and federal decisions influence rights, benefits, and national security. If an AI system produces errors or is perceived as unfair, the impact on public trust can be significant and long lasting.

I believe agencies need to operate with a higher standard of clarity and transparency. They should communicate how AI is used, how decisions are reviewed, and how individuals can raise concerns when something goes wrong. Trust is a critical requirement when utilizing AI in federal spaces where citizen data is being used and decisions that impact their daily lives are on the line.

Public confidence in the government’s use of AI will follow transparency, and nothing is more important for long-term adoption.

Readiness vs. Innovation

While AI adoption is expanding across government, the real measure of modernization isn’t how many tools agencies are deploying. It’s how ready they are to use those tools effectively and responsibly.

I believe AI readiness requires governance structures, ethical guidelines, workforce training, strong data foundations, consistent review practices, transparent communication, and acquisition strategies that reflect the reality of AI systems. When these elements are in place, AI will strengthen mission delivery and also enhance public confidence. If they are absent, AI will magnify risk and deepen existing challenges in federal agencies.
