Federal Government Faces Vendor Lock-In and Oversight Gaps in AI Procurement
Federal agencies are adopting artificial intelligence tools under White House-brokered deals offering access for as little as $0.42 per use, but a ProPublica investigation reveals significant risks to government data and operations.
The report, published April 6, 2026, draws lessons from the federal shift to cloud computing. Microsoft's $150 million security commitment and gaps in FedRAMP, the program that certifies cloud services for government use, offer a roadmap of what can go wrong as agencies move to generative AI and large language model (LLM) systems.
Three Core Problems
Vendor lock-in through discounted pricing: Deeply discounted AI tools can trap agencies into long-term dependence on a single vendor, making it difficult and costly to switch providers later. This mirrors problems the government encountered with cloud services.
Understaffed oversight: FedRAMP lacks the personnel to rigorously evaluate AI services before agencies deploy them. The program struggles to keep pace with the volume and complexity of new tools entering federal systems.
Conflicted assessors: Third-party firms hired to assess AI security are often paid by the same companies whose products they evaluate. This arrangement creates financial incentives that can compromise independent review and put sensitive federal data at risk.
What This Means for Government Workers
Professionals working with AI in government need to understand procurement mechanics and security frameworks. The investigation suggests that agencies adopting AI without addressing these structural problems may face operational disruptions, security breaches, or costly vendor migrations down the road.
The findings underscore a basic tension: discounted pricing accelerates adoption, but speed without oversight creates exposure.