How AI-Generated Misinformation Threatens Trust in US Government Records and Democracy
The Make America Healthy Again report contained AI-generated fake sources, raising concerns about the accuracy of US government records. This threatens trust in policymaking and democracy.

The Make America Healthy Again Report Reveals How AI Can Undermine the US Official Record and Democracy
In May, the White House’s Make America Healthy Again commission released the Make Our Children Healthy Again report. Soon after, the report was found to cite numerous non-existent sources, a classic sign that parts of it were generated by artificial intelligence tools. This raises serious concerns about the integrity of the US Official Record, the body of documentation produced by government operations. If AI-generated content built on inaccurate or fabricated data becomes common in government reports, it could erode trust in evidence-based policymaking and even threaten democratic processes.
Educators and professionals are increasingly familiar with writing that looks credible at first glance but quickly reveals inconsistencies or questionable sources. This often points to content produced by generative AI tools, which can turn out convincing sentences, paragraphs, and even references that don’t actually exist. Sometimes a student or writer feeds a draft into an AI tool to "improve" it, mixing original and AI-generated text in ways that blur authorship and accuracy.
AI in the Spotlight Beyond Education
Concerns about AI-generated content aren’t limited to classrooms. On May 22nd, the White House’s MAHA commission, led by Secretary of Health and Human Services Robert F. Kennedy Jr., released the contested report. Investigations uncovered dozens of fabricated references, some blending real information into fake citations, a hallmark of AI hallucination. The White House initially dismissed these as mere formatting errors but later issued a revised version with the non-existent studies removed.
This issue reflects a broader problem: people and organizations relying on AI to shortcut the hard work of research and writing. In journalism, for instance, a reporter for Wyoming’s Cody Enterprise was caught last year using AI to fabricate quotes, including some falsely attributed to the state’s governor. Media outlets that adopted AI tools have faced public backlash after publishing articles with factual errors or fake author bylines. CNN highlighted how over-reliance on AI can damage news credibility and fuel misinformation.
AI’s Impact on Legal and Academic Fields
The legal sector faces similar challenges. Damien Charlotin, a data consultant and lecturer at HEC Paris, is compiling a database of 168 legal cases involving AI-generated hallucinations, including fake citations and fabricated arguments. The cases span multiple countries, including the US, UK, Israel, and Canada. The UK High Court recently reprimanded lawyers for submitting fictitious case law, underscoring how AI errors can affect the justice system.
In academia, experts warn that AI may soon write entire research papers that are then reviewed by other AI systems, a feedback loop that risks contaminating the integrity of scientific publishing and peer review.
What This Means for the US Official Record
The US Official Record includes all materials produced by government operations—policy reports, agency statements, Congressional records, and more. It serves as a crucial source for understanding government actions and policymaker intentions. However, the MAHA report and other AI-related incidents highlight the fragility of trust in these records.
Government transparency and democracy rely on a shared understanding of what counts as valid data and honest record-keeping. While the Official Record is never perfect and always subject to interpretation or political use, maintaining accuracy is essential. The sloppy referencing in the MAHA report suggests a disturbing trend: outsourcing research and writing to AI without proper verification.
If this pattern spreads, it risks undermining evidence-based policy development and weakening democratic accountability. Trust in official documents is vital, and misplaced faith in AI tools threatens that trust.
Conclusion
AI tools have clear benefits, but unchecked use in producing official government content can lead to misinformation and damaged credibility. Writers working with or about AI should be vigilant about verifying facts and sources, especially when government transparency and public trust are at stake.
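One lightweight way to put that vigilance into practice is to check cited DOIs against a public bibliographic index before publishing. The Python sketch below is my own illustration of that idea, not a tool referenced in this article; it queries the real Crossref REST API (api.crossref.org), and the sample DOIs are placeholders chosen for demonstration, one genuine and one deliberately fabricated.

```python
# Minimal sketch: flag citations whose DOIs don't resolve in Crossref.
# The Crossref REST API is real; the example DOIs below are illustrative
# placeholders, not citations drawn from the MAHA report.
import requests

def doi_exists(doi: str) -> bool:
    """Return True if Crossref can resolve this DOI, False otherwise."""
    resp = requests.get(
        f"https://api.crossref.org/works/{doi}",
        # Crossref asks polite clients to identify themselves.
        headers={"User-Agent": "citation-checker/0.1 (mailto:you@example.com)"},
        timeout=10,
    )
    return resp.status_code == 200

if __name__ == "__main__":
    candidate_dois = [
        "10.1038/s41586-020-2649-2",  # a real DOI; should resolve
        "10.9999/fake.2025.00123",    # a made-up DOI; should not
    ]
    for doi in candidate_dois:
        status = "found" if doi_exists(doi) else "NOT FOUND - verify manually"
        print(f"{doi}: {status}")
```

A lookup failure doesn’t prove a citation is fake, since DOIs can be mistyped or indexed elsewhere, but it flags exactly the kind of reference that undermined the MAHA report for human review.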
For those interested in practical skills to use AI responsibly, exploring comprehensive training can help ensure AI serves as a tool, not a crutch. Learn more about ethical AI use and content creation at Complete AI Training.