House Bill Targets Deepfake Distribution, Protects AI Whistleblowers
Rep. Ted Lieu, D-Calif., introduced a bipartisan artificial intelligence bill that would impose stricter penalties for distributing deepfake and non-consensual images while making it easier for whistleblowers to report AI-related safety concerns.
The bill stems from recommendations issued by the bipartisan House Task Force on AI, which Lieu led alongside Rep. Jay Obernolte, R-Calif. Obernolte backs the measure and is developing his own separate AI package expected later this year.
What the Bill Covers
- Stricter penalties for distributing deepfake images
- Protections for whistleblowers reporting AI safety risks or violations
- Requirements for U.S. participation in international organizations developing technical standards for AI
- A prize competition for AI research and development
Lieu said the bill avoids more contentious debates. "It is not designed to be controversial," he said. "It is based on bipartisan legislation that other members have introduced, as well as the recommendations of the bipartisan House AI Task Force."
What It Doesn't Address
The bill sidesteps two thorny policy questions: whether a federal standard should preempt state AI laws, and whether AI systems used in critical infrastructure and education should face mandatory testing requirements.
Those issues remain areas of disagreement among lawmakers. By focusing on deepfake penalties, whistleblower protections, and international standards, Lieu's bill targets areas with broader consensus.