Government by AI? DOT Moves To Draft Rules With Gemini
The U.S. Department of Transportation is preparing to use large language models to draft federal transportation regulations. Internal demos pitched AI as a way to cut the path from idea to Office of Information and Regulatory Affairs (OIRA) review to 30 days, with a first draft ready in about 20 minutes.
At a leadership meeting, General Counsel Gregory Zerzan framed the priority as volume: "We don't need the perfect rule on XYZ ... We want good enough." He described DOT as the "point of the spear" for a broader federal push.
Staff reactions were split. Some saw a faster path to produce the routine text that clogs preambles. Others raised alarms about safety stakes, litigation risk, and the well-known tendency of AI systems to generate errors with confidence.
What's happening inside DOT
In a December demonstration for more than 100 employees, a presenter said Gemini can handle 80%-90% of rule drafting, with staff finishing the rest. A live prompt produced a document resembling a notice of proposed rulemaking (NPRM), but one attendee noted it lacked the precise amendments to the Code of Federal Regulations.
Leaders have argued the goal is speed over polish, signaling a plan to "flood the zone" with drafts. The department has already used AI to draft an unpublished FAA rule, according to a staff brief.
Elsewhere in government, AI boosters are pushing for "fast adoption," with some officials predicting human roles will shift to oversight of "AI-to-AI interactions." At the same time, former DOT AI leaders and external experts warn that offloading key tasks could undermine reasoned decision-making under the Administrative Procedure Act and increase the odds of harmful mistakes.
Context you should know
The administration has encouraged federal AI use through executive actions and OMB guidance, though prior documents stopped short of explicitly endorsing AI-written rules. A separate push from the Department of Government Efficiency reportedly proposed using AI to help cut half of all regulations and automate drafting for attorneys to edit.
Workforce cuts have thinned subject-matter expertise across agencies, including DOT attorneys. Critics argue that fewer domain experts plus more automated drafting is a risky pairing for safety-critical policy.
Operational takeaways for government, managers, and writers
If you lead policy or legal teams
- Define AI's role: assistant for boilerplate and summaries, not the source of binding CFR text or policy judgments.
- Require source-grounding: every AI-generated sentence must trace back to statutes, case law, prior rulemakings, data, or expert analysis.
- Adopt the NIST AI Risk Management Framework (AI RMF) for governance, testing, and monitoring.
- Set decision thresholds: what requires human authorship, SME sign-off, and attorney certification before OIRA submission.
- Be transparent: document where and how AI was used in the rulemaking record to withstand judicial review under the APA.
- Protect the record: log prompts, model versions, retrieval sources, and human edits to create an auditable chain of reasoning (a minimal logging sketch follows this list).
- Security and privacy: use FedRAMP-authorized environments, disable training on sensitive prompts, and sanitize data inputs.
- Labor relations: clarify roles, training, and performance standards to prevent morale issues and shadow processes.
- Plan for FOIA: assume prompts and outputs could be requested; write and store accordingly.
- Coordinate with OIRA early, including an AI-use statement in pre-brief materials.
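To make the record-protection item concrete, here is a minimal logging sketch in Python. It assumes a file-based JSON Lines audit trail; the DraftingRecord fields and file name are illustrative assumptions, not an actual DOT schema.

```python
# A minimal sketch of an auditable drafting record. All names here
# (DraftingRecord, drafting_log.jsonl) are illustrative, not a standard.
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class DraftingRecord:
    """One auditable entry per AI-assisted drafting step."""
    prompt: str                   # exact prompt sent to the model
    model: str                    # model name and version identifier
    retrieval_sources: list[str]  # IDs of the approved sources supplied
    output: str                   # raw model output, unedited
    human_edits: str              # summary of what reviewers changed
    editor: str                   # who made or approved the edits
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_record(record: DraftingRecord, path: str = "drafting_log.jsonl") -> None:
    """Append the record as one JSON line; the file becomes the audit trail."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")
```

An append-only log like this doubles as FOIA preparation: if prompts and outputs are already stored in one reviewable file, there is nothing to reconstruct after a request arrives.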
A safe workflow for AI-assisted NPRMs
- Scope: Write a focused problem statement, legal authority, and objectives. Humans own this.
- Grounding pack: Assemble statutes, prior rules, relevant case law, and datasets. Use retrieval tools to limit model outputs to approved sources (a retrieval sketch follows this workflow).
- Templates: Lock standardized NPRM structures and CFR amendment formats. Models write inside those guardrails.
- Generation: Allow AI to draft preamble sections (background, alternatives considered, small entity analysis), keeping binding text off-limits at first.
- Binding text: Attorneys and SMEs draft the CFR amendments; AI can propose only if every clause is source-cited and validated.
- Validation: Run hallucination checks, citation audits, and red-team reviews covering safety, equity, and economic impacts (a citation-audit sketch also follows this workflow).
- Sign-offs: SMEs, economists, and counsel certify reasoning and references. Capture human overrides and justifications.
- OIRA package: Include the grounding pack, AI-use memo, validation results, and a plain-language summary of key choices.
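A minimal sketch of the grounding-pack step, assuming approved sources are plain text keyed by citation ID. The keyword-overlap ranking is a crude stand-in for whatever retrieval tool an agency actually uses, and the example entries are illustrative.

```python
# Grounding pack sketch: only documents in the pack may be quoted or cited.
def build_grounding_pack(approved: dict[str, str]) -> dict[str, str]:
    """Freeze the allowlist of approved source documents."""
    return dict(approved)

def retrieve(pack: dict[str, str], query: str, k: int = 3) -> list[tuple[str, str]]:
    """Rank approved documents by crude keyword overlap with the query."""
    terms = set(query.lower().split())
    return sorted(
        pack.items(),
        key=lambda item: len(terms & set(item[1].lower().split())),
        reverse=True,
    )[:k]

# Illustrative entries only; real packs would hold full statutory text.
pack = build_grounding_pack({
    "49-USC-44701": "The Administrator shall promote safe flight of civil aircraft...",
    "NPRM-2023-0456": "Prior rulemaking on pilot rest requirements...",
})
context = retrieve(pack, "pilot rest requirements for civil aviation")
```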
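And a matching sketch of the citation-audit step in validation. It assumes drafts tag sources as [CITE:&lt;id&gt;]; that tag convention is an assumption for illustration, not an established format.

```python
# Citation audit sketch: every cited ID must resolve to the grounding pack,
# and uncited sentences get flagged for human review.
import re

def audit_citations(draft: str, pack: dict[str, str]) -> list[str]:
    """Return citation IDs that do not resolve to an approved source."""
    cited = re.findall(r"\[CITE:([^\]]+)\]", draft)
    return [cid for cid in cited if cid not in pack]

def audit_coverage(draft: str) -> list[str]:
    """Return sentences with no citation at all, for human review."""
    sentences = re.split(r"(?<=[.!?])\s+", draft)
    return [s for s in sentences if s and "[CITE:" not in s]
```

A nonempty result from either function blocks the draft from moving to sign-off.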
Quality and risk checks (use before any submission)
- Source fidelity: Every factual claim and policy justification has a verifiable reference.
- Completeness: Alternatives, costs, benefits, and small entity impacts are considered, not glossed over.
- Safety first: For aviation, pipelines, rail, and hazmat, require an elevated review path and independent SME sign-off.
- Equity and accessibility: Confirm meaningful consideration and plain-language standards.
- Consistency: Proposed CFR text aligns with preamble reasoning and legal authority.
- Change log: Track what the model drafted vs. what humans revised, with timestamped rationale (a diff-based sketch follows this list).
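As referenced above, a minimal change-log sketch built on Python's standard difflib. The entry format is an illustrative assumption, not an established records standard.

```python
# Change-log sketch: capture what the model wrote, what humans changed,
# and why, with a timestamp for each entry.
import difflib
from datetime import datetime, timezone

def change_log_entry(model_draft: str, human_revision: str, rationale: str) -> dict:
    """Diff the model draft against the human revision and record the why."""
    diff = list(difflib.unified_diff(
        model_draft.splitlines(),
        human_revision.splitlines(),
        fromfile="model_draft",
        tofile="human_revision",
        lineterm="",
    ))
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "diff": diff,
        "rationale": rationale,  # required: why the human overrode the model
    }
```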
Skills to build across teams
- Prompt frameworks that force citations, counterarguments, and uncertainty flags (a template sketch follows this list).
- Retrieval-augmented drafting: constraining outputs to approved rulemaking records.
- Legal and economic validation checklists for APA and EO requirements.
- Model behavior testing and red-teaming for safety-critical scenarios.
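As a sketch of the first two skills, here is a citation-forcing prompt template constrained to retrieved sources. The wording, the [CITE:&lt;id&gt;] and [UNSUPPORTED] conventions, and the function name are illustrative assumptions to adapt to whatever model API is in use.

```python
# Prompt-framework sketch: the template forces citations, a counterargument,
# and explicit uncertainty flags, and restricts the model to supplied sources.
PROMPT_TEMPLATE = """You are drafting a nonbinding preamble section.

Rules:
1. Use ONLY the sources below; cite each claim as [CITE:<id>].
2. If the sources do not support a claim, write [UNSUPPORTED] instead.
3. Include at least one counterargument and flag uncertainty explicitly.

Sources:
{sources}

Task:
{task}
"""

def build_prompt(task: str, sources: dict[str, str]) -> str:
    """Render the approved sources and task into the constrained template."""
    listing = "\n".join(f"[{sid}] {text}" for sid, text in sources.items())
    return PROMPT_TEMPLATE.format(sources=listing, task=task)
```

Pairing this template with the citation audit above closes the loop: the prompt demands citations, and the audit verifies them.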
If you need structured upskilling paths for policy, legal, and writing roles, review these programs: AI courses by job.
Bottom line
Speed is useful. Speed without strong controls invites weak rules, legal challenges, and real-world harm.
If agencies adopt AI for rulemaking, keep it where it helps most: drafting nonbinding sections, summarizing records, and generating structured options. Put humans in charge of the hard parts (authority, reasoning, and safety) and document every step as if the courts will read it, because they will.