OSTP's Misguided Effort to Deregulate AI
OSTP asked the public which regulations are "getting in the way" of AI adoption. With real harms growing and public trust shrinking, that's the wrong question. The office's mandate is to deliver scientific advice, not to compile a deregulatory wish list.
What the Public Told Them
The response was clear: more oversight, not less. Educators called for human-in-the-loop safeguards and strong privacy protections. Computing professionals flagged risks of deepfake pornography, AI-driven scams, wrongful arrests, autonomous weapons and mass privacy intrusions - and pushed for enforceable, tiered governance with broad stakeholder input.
Civil rights groups warned about algorithmic redlining in housing and lending, demanding fairness, transparency and accountability. Many individuals pressed for data rights: ownership, control and enforceable consent over personal information.
OSTP's Mission - And Its Track Record
Congress created OSTP to give the president "accurate, relevant, and timely" scientific advice. That means evidence first, conclusions second. Past OSTPs did exactly that: national tech policy under one administration, early EV and nanotech work under another, and the first US AI policy framework with the 2016 White House reports on AI and the economy.
Subsequent efforts built research institutes, issued guidance on federal AI use, and backed the OECD AI Principles. The through-line: promote innovation while building safeguards. The recent RFI cuts against that history.
Gold Standard Science vs. Today's AI
"Gold Standard Science" is a solid north star: reproducibility, transparency, uncertainty quantification, conflict-of-interest controls. Most commercial models fail that test. They're black boxes with unclear error rates, trained on questionable data with opaque provenance, and shipped with marketing claims rather than validated evidence.
We've already seen pseudo-scientific scoring tools worsen bias in credit and risk predictions. Demanding gold standards from agencies while loosening guardrails for industry is incoherent policy.
Public Opinion and Expert Consensus
Surveys show Americans want stronger rules for safety and data security, even if progress slows. Majorities of both the public and AI experts prefer more control over how AI touches their lives, and worry more about regulation being too weak than too strict. See recent polling from Pew Research.
AI pioneers like Geoffrey Hinton, Yoshua Bengio and Stuart Russell have urged tighter safeguards. Economists like Daron Acemoglu point out that without rules, benefits will concentrate while costs spread. This is exactly when OSTP should lean on science, not vibes.
What OSTP Should Do Now
- Withdraw the RFI and relaunch a process that centers evidence, not deregulation.
- Convene participatory forums across civil rights, safety, security, labor, education, health and industry.
- Commission expert reviews on actual harms, incident data and failure modes in deployed systems.
- Work with Congress on updated laws grounded in Gold Standard Science: risk tiers, testing, auditability and enforceable rights.
What Good Governance Looks Like (Practical)
- Risk-tiered rules: higher risk, tougher obligations (pre-deployment testing, third-party audits, incident reporting, recall authority); a minimal sketch follows this list.
- Evidence standards: documented datasets, model cards, uncertainty and error disclosures, reproducible evaluations.
- Human oversight: meaningful intervention and override in critical decisions (health, finance, employment, housing, benefits, justice).
- Data rights: clear consent, opt-out, data minimization, purpose limits and enforceable deletion.
- Safety-by-default for minors: content filters, deepfake protections, age-appropriate design and strict advertising limits.
- Security: supply chain integrity, red teaming, abuse testing, watermarking/traceability for synthetic media.
- Accountability: duty of care, documentation, audit logs, clear liability for foreseeable harms.
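To make the risk-tier idea concrete, here is a minimal sketch in Python. The tier names, classification rule and obligation lists are assumptions chosen for illustration, not any agency's actual taxonomy; the point is simply that obligations scale with risk and can be encoded in an organization's own inventory tooling.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    # Hypothetical tier names for illustration; not an official taxonomy.
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"


# Assumed mapping from tier to obligations; a real regime would define
# these in statute or agency rules, not in application code.
OBLIGATIONS = {
    RiskTier.MINIMAL: ["transparency notice"],
    RiskTier.LIMITED: ["transparency notice", "documented evaluation"],
    RiskTier.HIGH: [
        "pre-deployment testing",
        "third-party audit",
        "incident reporting",
        "recall plan",
    ],
}


@dataclass
class AISystem:
    name: str
    decision_domain: str   # e.g. "lending", "hiring", "content recommendation"
    affects_rights: bool   # does the system materially affect rights or access?
    user_facing: bool


def classify(system: AISystem) -> RiskTier:
    """Toy rule: rights-affecting uses and sensitive domains land in the high tier."""
    sensitive = {"lending", "hiring", "housing", "benefits", "justice"}
    if system.affects_rights or system.decision_domain in sensitive:
        return RiskTier.HIGH
    if system.user_facing:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL


if __name__ == "__main__":
    scorer = AISystem(name="credit-scorer-v2", decision_domain="lending",
                      affects_rights=True, user_facing=False)
    tier = classify(scorer)
    print(f"{scorer.name}: {tier.value} tier -> {OBLIGATIONS[tier]}")
```

In practice the classification rule would come from legislation or agency guidance; the sketch only shows how an internal compliance tool might make it checkable.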
Immediate Actions for Legal, Science and Research Teams
- Map your AI use: purpose, data sources, model type, decision impact and affected rights (a minimal inventory sketch follows this list).
- Run impact assessments (bias, privacy, safety) before deployment; repeat on updates.
- Stand up auditability: training data lineage, evaluation protocols, versioned configs and incident logs.
- Set acceptable use policies and red-team plans; publish known limitations and error rates.
- Update contracts: data rights, security controls, audit access, termination on safety failures.
- Train staff on model limits, escalation paths and user communication for high-stakes contexts.
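As one way to operationalize the mapping and auditability items above, here is a minimal sketch in Python of an AI use record appended to a timestamped audit log. The field names (purpose, data_sources, decision_impact, config_hash and so on) are hypothetical and should be adapted to your own inventory and evidence conventions.

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone


@dataclass
class AIUseRecord:
    # Hypothetical fields mirroring the mapping step; adapt to your own schema.
    system_name: str
    purpose: str
    data_sources: list[str]
    model_type: str
    decision_impact: str          # e.g. "advisory", "determinative"
    affected_rights: list[str]    # e.g. ["credit access", "privacy"]
    model_version: str
    config_hash: str              # pins the exact evaluated configuration
    known_limitations: list[str] = field(default_factory=list)


def append_audit_log(record: AIUseRecord, path: str = "ai_audit_log.jsonl") -> None:
    """Append one timestamped JSON line so reviews and audits can replay history."""
    entry = {"logged_at": datetime.now(timezone.utc).isoformat(), **asdict(record)}
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")


if __name__ == "__main__":
    append_audit_log(AIUseRecord(
        system_name="resume-screener",
        purpose="shortlist job applicants",
        data_sources=["historical hiring data"],
        model_type="gradient-boosted classifier",
        decision_impact="advisory",
        affected_rights=["employment opportunity"],
        model_version="1.4.0",
        config_hash="sha256-placeholder",
        known_limitations=["unvalidated on non-US resumes"],
    ))
```

Append-only records like these keep history reviewable for impact assessments and third-party audits; a real deployment would add access controls and retention policies.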
If your organization needs structured upskilling to build these guardrails, explore focused programs in AI policy, risk and audit at Complete AI Training.
The Right Question
The task is not "how do we remove AI safeguards?" It's "how do we ensure AI systems meet scientific standards, respect rights and produce verifiable public benefit?" That aligns with OSTP's mandate and with where the public, science and bipartisan policy are already pointing.