AI Is Drafting Federal Rules. Here's What Government Leaders Need To Do Next
ProPublica reported that the Department of Transportation plans to use Google Gemini to draft large portions of federal transportation rules, citing claims of 80-90% automation and a 20-minute path to a first draft. Internally, the stated goal sounds like volume and speed, even if output quality varies. That posture invites legal risk and public backlash, especially for safety-critical policy.
Why this matters
Notice-and-comment rulemaking under the Administrative Procedure Act demands clear proposals, considered responses to public input, and a reasoned final rule. Courts review rules under the "arbitrary and capricious" standard, looking for gaps in logic, reliance on irrelevant factors, or explanations that conflict with the record. If AI-generated drafts carry factual mistakes or thin analysis, expect challenges and delays that erase any time saved on the front end.
For reference, see the APA's judicial review standard at 5 U.S.C. § 706.
Where AI helps, and where it breaks
LLMs can structure first drafts, summarize precedent, and propose alternatives quickly. They also hallucinate, echo biased prompts, and struggle with long, technical records. On complex safety, economic, or environmental rules, small errors compound. Treating AI as the decider, with humans as proofreaders, is a quick path to weak records and avoidable litigation.
The legal risk in plain terms
An agency must consider relevant factors, explain its reasoning, respond to comments, and defend any models used. Simply rubber-stamping an LLM's output risks violating those duties. Courts have long insisted that ultimate responsibility stays with the agency, not a tool. If your rule relies on AI, your record must show independent human judgment, fact-checking, and a coherent rationale tied to the evidence.
Practical guardrails for agencies
- Define the role: Use LLMs for drafting assistance, synthesis, and option generation, not for final policy choices.
- Human-in-the-loop gates: Require named officials to sign off at each stage with accountability memos that explain the reasoning in human terms.
- Provenance logs: Record model version, prompts, key inputs, output edits, and validation steps, and preserve this as part of the administrative record (a minimal log-entry sketch follows this list).
- Strict validation: Run cite-checking, data verification, red-teaming for bias and hallucinations, and cross-agency expert review on any AI-assisted section.
- Model defense: If a model informed the analysis, be ready to mount a full analytical defense of that use and its limits.
- Comment responsiveness: Avoid generic or AI-written responses. Address material comments directly with evidence and reasoning.
- Scope choice: Pilot AI on low-risk guidance, summaries, or FAQs before touching high-stakes safety rules.
- Disclosure: State when and how AI was used in the preamble. This builds trust and preempts discovery fights.
- Security and privacy: Lock down prompts and outputs, scrub sensitive data before it leaves the agency boundary, and set procurement clauses for data handling and model updates (see the scrubbing sketch below).
- Litigation readiness: Assume challengers will look for internal inconsistency, factual errors, or signs of prejudgment. Fix those in draft, not in court.
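To make the provenance-log guardrail concrete, here is a minimal Python sketch of what one log entry might capture. The field names, docket number, and model identifier are illustrative assumptions, not a prescribed schema; adapt them to your agency's records-management requirements.

```python
import json
import hashlib
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ProvenanceEntry:
    """One record per AI-assisted drafting step, preserved with the administrative record."""
    rule_docket: str        # docket or RIN the draft supports
    model_version: str      # exact model identifier, not just the product name
    prompt: str             # full prompt text as submitted
    input_digest: str       # hash of the source material fed to the model
    output_digest: str      # hash of the raw model output, before human edits
    editor: str             # named official responsible for revisions and sign-off
    validation_steps: list  # cite-check, data verification, bias review, etc.
    timestamp: str          # UTC time of the drafting step

def digest(text: str) -> str:
    """Stable fingerprint for inputs and outputs without storing full copies inline."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def log_entry(entry: ProvenanceEntry, path: str = "provenance_log.jsonl") -> None:
    """Append the entry as one JSON line so the log stays append-only and auditable."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(entry)) + "\n")

# Hypothetical example entry; the docket number and model version are placeholders.
entry = ProvenanceEntry(
    rule_docket="DOT-OST-2025-0001",
    model_version="gemini-x.y",
    prompt="Summarize public comments on the proposed braking standard...",
    input_digest=digest("...source record text..."),
    output_digest=digest("...raw model output..."),
    editor="J. Smith, Office of the General Counsel",
    validation_steps=["cite-check", "data verification", "expert review"],
    timestamp=datetime.now(timezone.utc).isoformat(),
)
log_entry(entry)
```

Appending entries as JSON lines keeps the log easy to audit, and hashing inputs and outputs fingerprints them without duplicating the full record in every entry.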
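For the security-and-privacy guardrail, a hypothetical scrubbing pass might look like the sketch below. The regex patterns are deliberately minimal examples; a production deployment needs a vetted PII/CUI detection tool, not three regexes.

```python
import re

# Minimal illustrative patterns; real deployments need vetted detection tooling.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace matches with labeled placeholders before text leaves the agency boundary."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

# Hypothetical commenter data, scrubbed before it reaches any external model.
prompt = "Commenter Jane Doe (jane.doe@example.com, 555-867-5309, SSN 123-45-6789) argues..."
print(scrub(prompt))
```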
What to watch
Expect a wave of AI-assisted proposals. Internally, watch for shortcuts that trade rigor for speed. Externally, advocates and courts will scrutinize records for reasoning gaps, biased assumptions, and non-responsiveness to major comments.
If the strategy is "flood the zone," the counter is simple: build rules that can stand up in court. That means slowing down where it counts, documenting your judgment, and keeping AI in a support role.
Actions you can take this quarter
- Issue an internal AI-in-rulemaking policy with approval gates, disclosure rules, and validation checklists.
- Stand up an audit team to review AI-assisted drafts for factual accuracy, traceability, and consistency with the record.
- Train counsel, economists, and policy staff on prompt discipline, verification workflows, and how to write clear, human explanations.
- Select two low-risk rulemakings to pilot these controls before applying them to critical safety or infrastructure rules.
Resources
- ProPublica coverage of agency AI plans: propublica.org
- APA judicial review standard: 5 U.S.C. § 706
- If your team needs structured upskilling, see AI courses by job at Complete AI Training