How Flawed AI Prompts Drove Costly Mistakes in Trump-Era Veterans Health Contract Cuts
An AI used to cut VA health contracts flagged essential services as cancelable due to vague criteria and limited contract analysis. Experts warn human oversight is crucial to prevent costly errors.

Inside the Trump Administration’s AI Use to Slash Veterans’ Health Contracts
In early 2025, an AI system built by the Department of Government Efficiency (DOGE) was deployed to review contracts at the Department of Veterans Affairs (VA) and recommend cancellations. The AI flagged many contracts as “munchable” (internal shorthand for cancelable), but close examination reveals serious flaws in how the system was designed and operated.
The AI was instructed to flag for cancellation any contract that didn’t “directly support patient care.” The instructions, however, were vague and at times contradictory, and neither the AI nor its creator, software engineer Sahil Lavingia, had the healthcare contracting expertise needed for nuanced decisions. The result was errors, including essential contracts mislabeled as unnecessary.
Flawed AI Prompts and Model Limitations
The system prompt guiding the AI combined contract analysis with cancellation criteria, mixing relevant and irrelevant instructions. For example, it told the AI to describe contracts fully, but also to flag “soft services” like administrative consulting and data management as “munchable.” Experts warn that adding unrelated directives can confuse AI models, reducing accuracy.
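One way to reduce that confusion is to separate the work into two focused calls: first describe the contract, then screen the description against cancellation criteria. The following is a minimal sketch, assuming an OpenAI-style chat API; the prompt wording is illustrative, not the actual DOGE prompt.

```python
# Hypothetical two-stage review: description and screening are separate
# prompts, so classification criteria cannot bleed into the summary step.
# Assumes the OpenAI Python SDK; prompt text is illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def describe_contract(text: str) -> str:
    """Stage 1: summarize what the contract actually procures."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Summarize this federal contract: deliverables, "
                        "recipient facility, and period of performance. "
                        "Do not recommend any action."},
            {"role": "user", "content": text},
        ],
    )
    return resp.choices[0].message.content

def screen_for_cancellation(summary: str) -> str:
    """Stage 2: apply cancellation criteria to the neutral summary."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Given this contract summary, state whether it "
                        "directly supports patient care, and cite the "
                        "specific deliverable that justifies the answer."},
            {"role": "user", "content": summary},
        ],
    )
    return resp.choices[0].message.content
```

Keeping each prompt single-purpose also makes it far easier to audit which instruction produced which mistake.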
Additionally, the AI analyzed only the first 10,000 characters of each contract, missing important details in longer documents. It also ran on older models with small context windows, even though newer models available at the time could ingest far larger inputs. Both shortcomings increased the risk of misclassification.
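Truncation of this kind is avoidable even when a model’s context window is small. A minimal sketch of reviewing the full document in overlapping chunks, with chunk size and overlap as illustrative assumptions:

```python
# Hypothetical chunked review: every part of the contract gets analyzed,
# instead of silently dropping everything after the first 10,000 characters.
CHUNK_CHARS = 10_000   # illustrative; match the model's real input limit
OVERLAP = 500          # overlap so clauses aren't split at a boundary

def chunk_text(text: str) -> list[str]:
    """Split the full contract into overlapping windows."""
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + CHUNK_CHARS])
        start += CHUNK_CHARS - OVERLAP
    return chunks

def review_contract(text: str, analyze) -> list[str]:
    """Run a per-chunk analysis function (e.g., an LLM call) over the
    whole document, not just its opening pages."""
    return [analyze(chunk) for chunk in chunk_text(text)]
```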
Contract Value Errors and Data Limitations
The AI also hallucinated contract values, assigning a default figure of about $34 million to more than a thousand contracts, some of which were actually worth only a few thousand dollars. Rather than drawing on accurate, publicly available financial data, the system extracted dollar figures from the contract text itself, sometimes latching onto irrelevant numbers.
Experts describe this as a “lazy” approach that sacrifices accuracy for speed. While VA staff were supposed to vet AI outputs before final decisions, the initial errors still raise concerns about relying on AI for such critical tasks without proper safeguards.
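One straightforward safeguard is to never let the model assert a dollar figure at all, and instead look each award up in an authoritative source. The sketch below assumes the public USAspending.gov award endpoint and a total_obligation field; both should be verified against the current API documentation.

```python
# Hypothetical lookup of a contract's obligated amount from USAspending.gov,
# rather than asking the model to guess it from the contract text.
# Endpoint path and field name are assumptions; check the API docs.
import requests

def contract_value(award_id: str) -> float | None:
    """Return the obligated dollar amount for a federal award, or None."""
    url = f"https://api.usaspending.gov/api/v2/awards/{award_id}/"
    resp = requests.get(url, timeout=30)
    if resp.status_code != 200:
        return None  # unknown award: report "unknown", never a default value
    return resp.json().get("total_obligation")
```

Returning None for an unresolvable award, rather than a placeholder like $34 million, keeps bad data visibly bad instead of silently plausible.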
Vague Criteria for Cancellation Decisions
The prompts never defined key terms such as “core medical services” and “necessary consultants,” nor what truly constitutes “direct patient care.” The AI was asked to distinguish medical and clinical services from psychosocial support without detailed guidance, forcing judgment calls beyond its capability.
For instance, contracts related to maintenance of critical safety equipment like ceiling lifts—essential for patient and staff safety—were sometimes flagged as cancelable, despite their clear importance. This points to the AI’s limited understanding of healthcare operations.
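One mitigation is to write the definitions down explicitly, with an escape hatch for ambiguity, before any model sees a contract. The categories and wording below are illustrative assumptions, not VA policy:

```python
# Hypothetical rubric: every classification traces back to a written,
# human-reviewable definition, and ambiguity routes to a person.
DEFINITIONS = {
    "direct_patient_care": (
        "Services delivered to or physically around patients: exams, "
        "procedures, medication administration, and maintenance of "
        "safety-critical equipment such as ceiling lifts."
    ),
    "clinical_support": (
        "Services clinicians depend on to deliver care: sterile "
        "processing, medical equipment management, health IT uptime."
    ),
    "administrative": (
        "Back-office services with no clinical dependency: event "
        "planning, internal newsletters, general office supplies."
    ),
    "uncertain": (
        "Anything not clearly covered above. Route to a human reviewer "
        "with VA contracting expertise; never auto-flag."
    ),
}

def build_rubric() -> str:
    """Render the definitions into a prompt section."""
    return "\n".join(f"{name}: {rule}" for name, rule in DEFINITIONS.items())
```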
Examples of What AI Flagged as Munchable
- Healthcare technology management
- Data management and analytics
- Administrative consulting
- Case management administrative services
- Diversity, equity, and inclusion (DEI) initiatives
- Recruitment services
- Backup administrative roles
- Contract extensions not directly tied to patient care
Despite AI instructions to exclude critical audits and compliance reviews from cancellation, many such contracts were still flagged as “soft services” and recommended for termination. In one case, the AI acknowledged the importance of compliance but still suggested canceling the contract, calling it an administrative support function rather than direct patient care.
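A prompt instruction is a request, not a guarantee. Categories that must never be auto-canceled can instead be enforced in code after the model runs, as in this hypothetical sketch (the keyword list is illustrative, not a real VA exclusion list):

```python
# Hypothetical deterministic guardrail: protected contracts go to human
# review no matter what the model recommended.
PROTECTED_TERMS = ("audit", "compliance review", "ceiling lift",
                   "safety inspection")

def final_decision(contract_text: str, model_says_munchable: bool) -> str:
    """Hard rule overrides the model for protected categories."""
    lowered = contract_text.lower()
    if any(term in lowered for term in PROTECTED_TERMS):
        return "human_review"
    return "flagged" if model_says_munchable else "kept"
```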
Direct Patient Care Defined but Ambiguous
The AI was told that direct patient care includes physical exams, medical procedures, medication administration, maintenance of critical equipment, and essential therapeutic services. However, the criteria left many gray areas, such as how to judge “proven efficacy” or “reasonable pricing,” especially since the AI only viewed partial contract texts.
Experts doubt that AI, without specialized training or complete data, can accurately assess whether a contract is fairly priced or truly essential. The lack of context and domain knowledge was a major limitation.
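One partial remedy is to require structured output in which the model can decline to judge. The schema below is an assumption about what a safer output contract could look like, not the format DOGE used:

```python
# Hypothetical output validation: a pricing judgment must either quote
# supporting evidence or admit insufficient information.
import json

REQUIRED_FIELDS = (
    "classification",       # e.g., direct_patient_care | support | admin
    "pricing_assessment",   # fair | high | insufficient_information
    "evidence",             # verbatim quote from the contract
    "saw_full_text",        # did the model see the whole document?
)

def parse_review(raw: str) -> dict:
    """Reject output that skips a field or judges price without evidence."""
    result = json.loads(raw)
    missing = [f for f in REQUIRED_FIELDS if f not in result]
    if missing:
        raise ValueError(f"model omitted required fields: {missing}")
    if (result["pricing_assessment"] != "insufficient_information"
            and not result["evidence"]):
        raise ValueError("pricing judgment with no supporting quote")
    return result
```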
Assumptions About Insourcing and Contract Waste
The AI was also guided to flag services that “could be easily insourced” by VA staff, such as video production, customer support, basic IT, and event planning. Yet experts caution that these assumptions are flawed: contracting for some services can be more cost-effective, and forcing insourcing could divert staff away from core patient care responsibilities.
Lavingia admitted that some flagged contracts might be better kept external, but a hiring freeze limited the VA’s ability to insource effectively.
The VA’s AI Ambitions and Future Plans
Despite these challenges, the VA defends its use of AI in contract review as a “commonsense precedent.” Internal communications reveal ambitions to expand AI and automation to speed up disability claims processing, aiming to reduce decision times dramatically.
While AI can assist with high-volume tasks, this case highlights the risks of deploying artificial intelligence without sufficient domain expertise, clear instructions, and thorough human oversight—especially in areas as critical as veterans’ healthcare.
Final Thoughts for Government and Management Professionals
This example underscores the need for careful planning when integrating AI into government operations. Clear, precise guidelines and domain knowledge are essential to avoid costly mistakes. Human oversight remains vital to catch AI errors and ensure decisions align with agency missions.
For those managing AI projects within government or HR, this case is a reminder: AI tools can support efficiency but should never replace informed human judgment, especially where lives and public trust are at stake.
Learn more about responsible AI deployment and training at Complete AI Training.