How Generative AI Is Changing Government Jobs—And Why It’s Far From Ready
Generative AI is automating routine government tasks, but struggles with complex legal work and accuracy. Experts warn against overreliance as AI remains immature for high-stakes decisions.

Is AI Running the Government? What We Know So Far
Generative AI is being introduced to automate tasks previously handled by government employees, with estimates suggesting up to 300,000 federal job cuts by year-end. Agencies like the General Services Administration (GSA) and Social Security Administration have adopted AI tools resembling ChatGPT to assist employees, while the Department of Veterans Affairs uses generative AI for coding.
Meanwhile, the U.S. Army has deployed CamoGPT, an AI tool that reviews documents and removes references to diversity, equity, and inclusion. The Department of Education plans to implement AI chatbots to answer student and family questions about financial aid and loan repayment. Despite these deployments, the technology remains immature and not yet reliable for complex government functions.
Current AI Roles in Government
Government AI chatbots mainly support routine tasks like drafting emails and summarizing documents. Agencies are exploring expanded uses, such as applying AI in procurement—the process where the government contracts private companies for goods and services. Procurement involves legal negotiation to ensure compliance with regulations such as the Americans with Disabilities Act, along with transparency requirements.
However, AI’s ability to speed up procurement is questionable. Legal experts warn that AI-generated contract language can introduce errors and create more work. Since contracts require precise, vetted language, lawyers often find AI-generated terms unreliable and time-consuming to review. In many cases, copying and pasting existing text remains more efficient.
Moreover, AI is prone to inaccuracies in legal reasoning. A 2024 study found that AI chatbots designed for legal research made errors 17% to 33% of the time. This raises concerns about their use in high-stakes government decisions.
Common AI Mistakes in Legal Contexts
- Generating nonexistent legal cases, leading to sanctions against lawyers who cited them.
- Claiming unlikely legal outcomes, such as a state supreme court overruling the U.S. Supreme Court.
- Confusing a litigant's argument with the court's ruling.
- Citing laws that have since been overturned.
- Misinterpreting the prompt itself, for example by fabricating rulings for fictional judges.
Legal systems change frequently; cases get overruled and laws get repealed, making static AI models prone to outdated or incorrect information. For example, when asked whether the U.S. Constitution guarantees abortion rights, an AI might cite Roe v. Wade and Planned Parenthood v. Casey, both since overturned by Dobbs v. Jackson Women’s Health Organization, and produce an incorrect answer.
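To make the staleness problem concrete, here is a minimal, hypothetical sketch. The dictionaries and function names are illustrative assumptions, not any real legal database: one table stands in for a model's frozen training snapshot, the other for the up-to-date precedent check a reliable legal tool would need before citing a case.

```python
# Hypothetical sketch: why a model trained before Dobbs gives stale answers.
# TRAINING_SNAPSHOT mimics a static training cutoff; CURRENT_STATUS mimics
# the live precedent check a trustworthy legal tool would have to perform.

TRAINING_SNAPSHOT = {
    "abortion_rights": ["Roe v. Wade (1973)", "Planned Parenthood v. Casey (1992)"],
}

CURRENT_STATUS = {
    "Roe v. Wade (1973)":
        "overruled by Dobbs v. Jackson Women's Health Organization (2022)",
    "Planned Parenthood v. Casey (1992)":
        "overruled by Dobbs v. Jackson Women's Health Organization (2022)",
}

def cite(topic: str) -> list[str]:
    """Return citations, flagging any precedent that has since been overruled."""
    results = []
    for case in TRAINING_SNAPSHOT.get(topic, []):
        status = CURRENT_STATUS.get(case)
        results.append(f"{case} -- STALE: {status}" if status else case)
    return results

if __name__ == "__main__":
    for line in cite("abortion_rights"):
        print(line)
```

Without the second table, the model simply returns the overturned cases as if they were good law, which is exactly the failure mode described above.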
Tax laws also feature ambiguity, with courts often disagreeing on interpretations. This complexity challenges AI’s ability to provide clear legal guidance.
Is AI Handling Your Taxes?
The IRS does not currently offer a generative AI chatbot for public use, though a 2024 IRS report recommends investing in AI tools to assist taxpayers. Pilot programs, such as one in Pennsylvania run in partnership with OpenAI, have shown that AI can save government workers substantial time on administrative tasks; participants reported saving an average of 95 minutes per day.
However, the rapid rollout of AI tools in government often lacks careful integration. Agencies have deployed AI widely without tailoring it to specific workflows or ensuring reliability. Experts caution that this approach risks inefficiency and mistakes instead of genuine productivity gains.
Studies on government chatbots suggest they should include clear disclaimers stating that users are interacting with AI, not humans. Responses should also clarify that outputs are not legally binding, so users understand that AI-generated advice does not replace official guidance.
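In practice, this pattern is simple to enforce at the application layer. The sketch below is a minimal illustration under assumed names; the notice wording, the `wrap_response` function, and the example reply are all hypothetical, not any agency's actual API or required language.

```python
# Minimal sketch of the disclosure pattern the studies describe.
# All names and wording here are illustrative assumptions.

AI_NOTICE = "You are chatting with an automated assistant, not a human."
LEGAL_NOTICE = (
    "This response is informational only and is not legally binding; "
    "it does not replace official agency guidance."
)

def wrap_response(model_output: str) -> str:
    """Attach both required disclosures to every chatbot reply."""
    return f"{AI_NOTICE}\n\n{model_output}\n\n{LEGAL_NOTICE}"

print(wrap_response(
    "Your FAFSA deadline depends on your state; check studentaid.gov."
))
```

Wrapping every reply in one place, rather than relying on the model to volunteer the disclosure, keeps the disclaimer consistent even when the underlying model changes.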
Agencies also need clear lines of responsibility for chatbot development and updates. AI developers often work separately from legal and policy experts, which complicates maintenance and accuracy when government policies change.
The Bottom Line on AI in Government
Generative AI is still early-stage technology. While it shows promise for automating routine tasks, it struggles with complex legal reasoning and high-stakes decision-making. Governments and tech companies are exploring potential use cases, but widespread reliable deployment remains uncertain.
For those working in government, legal, and writing roles, it’s crucial to approach AI tools with caution. Understanding their limitations helps prevent overreliance on AI where human expertise remains essential.
For practical guidance on integrating AI tools effectively and responsibly, consider exploring specialized training at Complete AI Training.