DOT Plans To Use Google's Gemini To Draft Federal Rules - A Brief For Writers
The Department of Transportation and its agencies - NHTSA, FAA, FHWA, FMCSA, FRA, and FTA - are moving to let AI help write regulations, according to a ProPublica investigation. If you drive, fly, or take a train, this touches your day-to-day life.
For writers, editors, and policy pros, that means the writing stack inside government is changing fast. The work won't disappear, but the shape of the job will.
What's actually happening
Per ProPublica, DOT's general counsel Gregory Zerzan told agency leaders the department aims to be "the first agency that is fully enabled to use AI to draft rules," using Google's Gemini. Internal messages framed it as a major shift in how rulemakings get written, with attorney Daniel Cohen saying the system would help staff work "better and faster."
Speed is the goal. Staff were told Gemini could handle 80%-90% of the writing, with humans finishing the rest. Zerzan added: "We don't need the perfect rule on XYZ. We don't even need a very good rule on XYZ. We want good enough. We're flooding the zone." He wants draft rules in roughly 20 minutes and a draft ready for OIRA review in 30 days.
Not everyone's convinced. ProPublica reports internal pushback over accuracy and reliability, and staff said an unpublished FAA rule was already drafted with AI.
Why this matters for writers
If you write policy, technical standards, compliance docs, or news, AI is becoming a first pass - not the final word. The bottleneck shifts from drafting to verification, sourcing, and accountability.
Your value moves to judgment: what's missing, what's wrong, what's risky, and what stands up to legal, public, and media scrutiny. The writers who win will run tight review loops and leave an audit trail.
A practical playbook you can use now
- Force citations. Require sources for claims, numbers, and definitions. No source, no publish.
- Draft-then-verify. Let AI produce structure and boilerplate; humans do analysis, stakeholder impacts, and legal references.
- Provenance tracking. Save prompts, versions, and sources so you can show how a line made it into the draft.
- Red-team your prompts. Stress-test for hallucinations, missing stakeholders, and misleading summaries before real use.
- Policy guardrails. Add banned claims, checklist thresholds (e.g., data sources, CFR/USC cites), and required SME sign-offs.
- Bias and impact checks. Document fairness reviews and potential disparate impacts for anything that affects the public.
- Security hygiene. Don't paste sensitive or predecisional info into third-party tools without approval.
- Timeboxing ≠ shortcuts. Fast drafts still get SME, legal, and copy edits. Put those gates on the calendar.
- Correction drills. Rehearse worst-case errors and your public correction protocol.
- Transparent attribution. Note AI assistance in internal docs; keep human accountability on the final byline.
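The provenance and attribution steps above can be sketched as a tiny audit log. This is a minimal illustration, not any agency's actual system: the `DraftRecord` fields, the JSONL file name, and the model label are all hypothetical choices for the sketch.

```python
import hashlib
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class DraftRecord:
    """One provenance entry: which prompt, model, and sources
    produced a given passage of AI-assisted draft text.
    (Illustrative structure only; field names are assumptions.)"""
    prompt: str
    model: str                                   # e.g. a model label like "example-model"
    sources: list = field(default_factory=list)  # citations the passage relies on
    output: str = ""
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def fingerprint(self) -> str:
        """Stable short hash of prompt + output, so a line in the
        final document can be traced to the draft that produced it."""
        payload = (self.prompt + self.output).encode("utf-8")
        return hashlib.sha256(payload).hexdigest()[:12]

def log_record(record: DraftRecord, path: str = "provenance.jsonl") -> None:
    """Append the record to a JSONL audit file, one entry per line."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")
```

Even a log this simple answers the key audit questions: what was asked, what came back, what sources backed it, and when. The fingerprint lets an editor cite a specific draft in a correction or FOIA response without storing the whole exchange inline.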
If you're covering this story
- Governance: Who approves prompts, models, and updates? What are the audit and recusal rules?
- Model choice: Why Gemini? What are known error rates on legal/policy tasks compared to alternatives?
- Vendor risk: What happens if access changes, costs rise, or output shifts after a model update?
- Process claims: 20-minute drafts and 30-day turnarounds - what steps get compressed or skipped?
- Oversight: OIRA's role, public comment integrity, FOIA exposure, and record-keeping of AI outputs.
- Workforce: Impact on federal writers and editors; training, reskilling, and union perspectives.
Sources and further reading
ProPublica's reporting is the spark here. Start there, then review vendor materials with caution and independent verification.
Keep your edge
If you cover public-sector AI or write inside government, build context on policy, procurement, and risk frameworks.
Bottom line: AI can make first drafts cheap and fast. Writers who stay in demand are the ones who make those drafts accurate, defensible, and useful.