Congress Weighs AI’s Promise and Peril for Federal Workers Amid Security Concerns and Global Competition
Congress reviews AI use in government amid privacy and security concerns. Challenges include outdated systems, procurement barriers, and the need for better workforce training.

Congress Reviews AI Use in Government: Opportunities and Risks
Federal agencies are integrating AI tools to boost efficiency, but concerns about privacy and security remain prominent. On June 5, the House Oversight Committee held a hearing to evaluate how AI is currently used in government and the challenges that hinder its wider adoption.
Current AI Applications and Procurement Challenges
Several departments, such as Health and Human Services and Veterans Affairs, use AI for medical research, outbreak tracking, and analyzing health records. However, outdated technology and strict procurement rules limit broader AI deployment. Bhavin Shah, CEO of Moveworks, argued that federal employees deserve rapid access to AI tools on par with those in the private sector. He noted that achieving FedRAMP security certification took his company more than three years and $8.5 million, a barrier that falls especially hard on smaller innovators.
Global Competition and Innovation Barriers
Ylli Bajraktari, president of the Special Competitive Studies Project, warned that slow AI adoption puts the U.S. at a disadvantage compared with competitors such as China. He cited bureaucratic inertia, legacy IT systems, and limited AI literacy among federal workers as the main obstacles. His recommendations included:
- Creating a dedicated AI council at the White House, modeled on a “space race” initiative
- Increasing non-defense AI R&D funding to $32 billion
- Developing a targeted AI talent strategy to boost workforce skills and attract global STEM experts
- Reforming procurement processes to accelerate AI integration
- Strengthening international alliances in AI and cybersecurity
Security and Privacy Concerns Highlighted
Bruce Schneier, a security expert at the Harvard Kennedy School, raised alarms about the risks of unchecked AI use in government. He pointed to recent controversies involving Elon Musk and the Department of Government Efficiency (DOGE), which reportedly ran unauthorized AI systems on sensitive federal datasets. Schneier warned that such practices expose Americans’ data to potential adversaries and compromise national security.
These issues sparked intense debate during the hearing. Republicans blocked a Democratic effort to subpoena Musk for testimony, despite concerns about his management of AI tools and access to sensitive data. Rep. Stephen Lynch criticized Musk’s actions as harmful to government integrity and public safety.
Practical Steps for Safer AI Adoption
Linda Miller, founder of the fraud detection platform TrackLight, cautioned against expecting AI to rapidly overhaul government IT systems. She suggested focusing on automating routine tasks to free federal workers for higher-level duties. Miller recommended establishing “regulatory sandboxes”—controlled environments where AI systems can be tested under supervision before wider deployment.
This approach balances innovation with security and allows government agencies to explore AI’s potential without exposing sensitive data prematurely. It also acknowledges the slow pace of change in legacy systems and federal acquisition processes.
Moving Forward in AI for Government
Federal workers stand to benefit significantly from AI-powered tools that improve efficiency and decision-making. But realizing those gains will require sustained attention to procurement barriers, workforce training, and security protocols. Congress faces the challenge of enabling AI adoption while protecting privacy and maintaining public trust.
For government professionals interested in expanding their AI skills to better navigate these changes, resources like Complete AI Training’s government-focused courses offer practical learning options tailored to the public sector.