Pro-Human Declaration: Humans Stay in Charge of AI

A bipartisan coalition unveiled the Pro-Human Declaration to keep humans in charge of advanced AI. Expect kill switches, bans on self-replication, and strict pre-release testing for products aimed at kids.

Categorized in: AI News, IT and Development
Published on: Mar 09, 2026

Pro-Human Declaration: What IT and Development Teams Need to Know

A bipartisan coalition of experts, former officials, and public figures has released the Pro-Human Declaration, a framework meant to keep humans in charge of powerful AI systems, cap concentration of power, protect human experience, preserve individual liberty, and enforce accountability.

The backdrop matters. Public pushback against an unregulated race to superintelligence is growing, and the Pentagon-Anthropic standoff put governance gaps on full display. The declaration is an expectation-setting signal for how high-impact AI should be built, tested, and controlled.

Why it matters for builders

The absence of clear rules isn't just a policy problem; it's an engineering problem. Without guardrails, teams ship risk by default: autonomous loops, self-modifying agents, and products that reach kids without safety evidence. The declaration offers a practical baseline you can implement now, even before formal regulation lands.

Core safeguards in the declaration

  • Pause on superintelligence development until there's scientific consensus that it can be done safely.
  • Mandatory off-switches on powerful systems, with human decision-makers retaining final authority.
  • Bans on architectures capable of self-replication, autonomous self-improvement, or resistance to shutdown.
  • Mandatory pre-deployment testing, especially for products targeting children, including checks for increased suicidal ideation and emotional manipulation risks.
  • Clear accountability for AI companies if harms occur.

Implementation notes for engineering leads

  • Off-switch and human-in-the-loop: Add circuit breakers at the policy and infrastructure layers (feature flags, traffic gates, model routing kills). Require human approval for high-risk actions, escalations, or capability unlocks. A minimal sketch follows this list.
  • Block self-replication/self-improvement: Disable write/exec permissions to model and agent code at runtime. Remove auto-update pathways for models/agents. Enforce signed releases, egress controls, and container sandboxes. No background schedulers that spin up new agents without explicit approval.
  • Pre-deployment testing (especially for minors): Run red-team evals for emotional manipulation, persuasion, and crisis content. Include targeted tests for suicidal ideation triggers and grooming patterns. Gate launch on predefined safety thresholds, not subjective review.
  • Auditability: Ship model cards, a data/weights SBOM (software bill of materials), capability maps, and incident response runbooks. Log prompts, tool calls, and high-risk decisions with retention policies and access controls.
  • Compute and access governance: Quotas and rate limits for agentic tools. Scoped credentials, short-lived tokens, and change freezes on high-risk pipelines. Separate staging from prod with independent approvals.
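
Here is the off-switch item above, sketched in Python. Everything in it is illustrative, not an established API: the KillSwitch class, the layer names, and the high-risk action list are assumptions. In production, the flags would live in your feature-flag service and approval would come from a paging or ticketing workflow.

    from dataclasses import dataclass, field

    @dataclass
    class KillSwitch:
        """Layered circuit breaker: tripping any layer halts the AI service."""
        flags: dict = field(default_factory=lambda: {
            "model_serving_enabled": True,   # feature-flag layer
            "traffic_gate_open": True,       # infrastructure layer
            "credentials_valid": True,       # access layer
        })

        def tripped(self) -> bool:
            # The service runs only while every layer agrees; a human
            # operator flipping any one flag stops it.
            return not all(self.flags.values())

        def trip(self, layer: str) -> None:
            self.flags[layer] = False

    HIGH_RISK_ACTIONS = {"deploy_model", "unlock_capability", "spawn_agent"}

    def execute(action: str, switch: KillSwitch, human_approved: bool = False) -> None:
        if switch.tripped():
            raise RuntimeError("kill switch engaged: refusing all AI actions")
        if action in HIGH_RISK_ACTIONS and not human_approved:
            # Humans retain final authority: high-risk actions require
            # explicit sign-off instead of running autonomously.
            raise PermissionError(f"{action!r} requires human approval")
        print(f"executing {action}")

    switch = KillSwitch()
    execute("answer_query", switch)                       # routine action runs
    execute("deploy_model", switch, human_approved=True)  # gated action, signed off
    switch.trip("traffic_gate_open")                      # operator hits the switch
    # execute("answer_query", switch)  # would now raise RuntimeError

The point of the layering is that no single compromised component can keep the system running: the flag store, the traffic gate, and credential revocation are independent paths to the same stop.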

The players

  • Max Tegmark: MIT physicist and AI researcher who helped organize the effort.
  • Pete Hegseth: U.S. Defense Secretary who designated Anthropic a "supply chain risk" after the company declined unlimited government use of its tech.
  • Steve Bannon: Former Trump advisor, endorsing the declaration.
  • Susan Rice: Former U.S. National Security Advisor under President Obama and Domestic Policy Advisor under President Biden, endorsing the declaration.
  • Mike Mullen: Former Joint Chiefs Chairman, signatory.

What they're saying

"There's something quite remarkable that has happened in America just in the last four months. Polling suddenly [is showing] that 95% of all Americans oppose an unregulated race to superintelligence." - Max Tegmark (TechCrunch)

"This is not just some dispute over a contract. This is the first conversation we have had as a country about control over AI systems." - Dean Ball, Senior Fellow, Foundation for American Innovation (The New York Times)

What's next

The coalition is prioritizing mandatory pre-release testing for AI products, especially those for children, as the first principle to gain traction. Expect procurement and compliance teams to start asking for test evidence, red-team results, and shutdown procedures as table stakes.

Action checklist (start this week)

  • Inventory agent capabilities: self-replication, autonomous optimization loops, code-writing with execution, and unrestricted tool use. Disable or gate any high-risk path.
  • Implement a real kill switch: feature flag + traffic kill + credentials revocation for AI services, tested in drills.
  • Add mandatory pre-deploy safety testing to CI/CD with clear pass/fail thresholds; block release on failure (see the gate sketch after this checklist).
  • Ship documentation: model card, eval report, SBOM, incident playbook, and a contact for escalation.
  • Update vendor clauses: require safety evals, shutdown support, and no self-replicating or self-modifying architectures.
  • If your product reaches minors: run specialized harm testing and obtain clinical review for crisis content flows.
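
And a minimal sketch of the CI/CD safety gate from the checklist, assuming your eval harness writes metric scores to a JSON file (eval_results.json here) keyed by the names in THRESHOLDS. The metrics and cutoffs are placeholders to show the pattern; set real values from your own red-team evals.

    import json
    import sys

    # Gate on predefined thresholds, not subjective review: each metric is a
    # failure rate in [0, 1] that must stay at or below its cutoff.
    THRESHOLDS = {
        "emotional_manipulation_rate": 0.01,
        "crisis_content_failure_rate": 0.0,   # zero tolerance for minors
        "persuasion_redteam_failure_rate": 0.02,
    }

    def main(results_path: str = "eval_results.json") -> int:
        with open(results_path) as f:
            results = json.load(f)

        failures = []
        for metric, cutoff in THRESHOLDS.items():
            observed = results.get(metric)
            if observed is None:
                # Missing data counts as a failure: no evidence, no release.
                failures.append(f"{metric}: missing from eval results")
            elif observed > cutoff:
                failures.append(f"{metric}: {observed} exceeds allowed {cutoff}")

        if failures:
            print("SAFETY GATE FAILED, blocking release:")
            for line in failures:
                print(f"  - {line}")
            return 1  # nonzero exit fails the pipeline stage

        print("Safety gate passed.")
        return 0

    if __name__ == "__main__":
        sys.exit(main(*sys.argv[1:]))

Wire this in as a required CI step after the eval job; because the thresholds live in version control, changing the bar itself goes through code review.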

The takeaway

The Pro-Human Declaration sets a clear bar: humans stay in charge, no self-replicating or self-improving systems, and real testing before release, especially for kids. For IT and dev teams, that translates into concrete work: kill switches, gated autonomy, evals in CI/CD, and auditable artifacts. Start there, and you'll be ready as policy catches up.

