Leading Experts Call to Pause AI Development Beyond Human Intelligence
Wednesday, 22 October 2025, 19:48
More than 850 figures across science, tech, and public life have called for a pause on AI systems that could exceed human cognitive abilities. Their message is blunt: unchecked pursuit of "superintelligence" brings unacceptable risks to security, governance, and our long-term future.
Their position echoes earlier warnings from the AI safety community and adds fresh urgency. For engineering leaders and developers, this isn't a think-piece-it's a signal to update risk models, deployment gates, and team practices now.
The appeal and who signed it
The signatories span Big Tech pioneers, top AI researchers, military and policy veterans, and public figures. Highlights include:
- Steve Wozniak (Apple co-founder)
- Richard Branson (Virgin Group)
- Yoshua Bengio, Geoffrey Hinton, Stuart Russell (AI researchers)
- Mike Mullen, Susan Rice
- Steve Bannon, Glenn Beck
- Prince Harry and Meghan Markle
- Mary Robinson (former President of Ireland)
The group urges a complete halt on developing systems intended to exceed human performance on most intellectual tasks-until there is scientific consensus on safety and broad public support.
Why they want a pause
- Loss of control over autonomous capabilities and decision loops
- National security exposure and strategic instability
- Economic dislocation and widened inequality
- Extreme downside risk, up to civilizational failure
Yoshua Bengio warns that systems surpassing human capability may arrive sooner than many expect. His stance: build AI that cannot harm people and include the public in decisions that set the guardrails for civilization-scale tech.
The split: promise vs. risk
Within the community, the debate is sharp. Some argue superintelligence could accelerate discovery and cure bottlenecks across science and infrastructure. Others see an asymmetric bet with poor fail-safes and unclear rollback paths.
According to a survey mentioned in the discussion, only 5% of adults support fast, unregulated development. Most want strong constraints and transparent oversight.
What this means for engineering teams
If you're building or integrating advanced models, treat "beyond-human" capability thresholds as hard risk boundaries, not as marketing milestones. Even if a formal pause never lands, expect buyers, regulators, and insurers to demand stricter controls.
Practical steps you can implement this quarter
- Define redlines: enumerate forbidden capabilities (autonomous replication, cyber-offense, bio design, social manipulation) and block them at the design level.
- Institutionalize capability evals: run adversarial, jailbreak, and hazardous knowledge tests pre-release and continuously in production.
- Gate high-risk training and fine-tuning: require executive approval, risk review, and documented safety mitigations before pushing capability jumps.
- Constrain autonomy: human-in-the-loop for sensitive actions; require multi-party approval for system-level changes or real-world execution.
- Lock down interfaces: rate limiting, content filters, abuse detection, and audit logging by default.
- Publish model and system cards: document intended use, limitations, red-team results, and prohibited applications.
- Red-team as a service: rotate internal and external red teams; reward exploit discovery; fix within defined SLAs.
- Secure the pipeline: data provenance checks, least-privilege secrets, reproducible training, and guarded fine-tune endpoints.
- Incident response: stand up an AI-specific incident playbook with isolation steps, rollback plans, and public comms templates.
- Map to standards: align controls to an established framework to reduce audit pain and regulator friction.
Policy and governance guardrails
- Set a company-wide threshold definition for "beyond human" capabilities and treat crossing it as a prohibited objective.
- Create a safety council with veto power over training runs, model releases, and integrations that materially raise capability.
- Require external review for systems interacting with critical infrastructure, finance, bio, or civic processes.
- Tie compensation to safety outcomes, not just throughput or engagement metrics.
Developers: how to adapt your day-to-day
- Instrument everything: add telemetry for prompts, outputs, tool calls, and user actions tied to risk indicators.
- Prefer containment: run risky tools in sandboxes; strict scopes for agent tools; revoke-by-default permissions.
- Bias toward retrieval and constraint: prefer RAG with curated corpora over larger raw capability jumps.
- Automate safety tests in CI: block merges on failed evals for prohibited behaviors.
- Track drift: compare current outputs to baseline safety metrics; alert on regressions.
Where to go deeper
For context on pause proposals and risk framing, see prior public appeals and risk frameworks. They provide language you can mirror in internal policies and vendor questionnaires.
- Future of Life Institute: Open letter on pausing giant AI experiments
- NIST AI Risk Management Framework
Skills and training for teams
If your roadmap includes agentic systems, multi-tool orchestration, or fine-tuning, invest in safety-aligned workflows. Structured practice beats ad-hoc patches after incidents.
Practical prompt engineering resources can help your team standardize constraint patterns, eval design, and repeatable test suites.
Bottom line
The call to pause is less about fear and more about control. Build with explicit boundaries, test for failure modes you don't want to believe in, and keep a human gate on anything that can act in the real world.
Whether or not a formal ban happens, the bar for safety and oversight just moved. Teams that adopt these guardrails now will ship faster later-with fewer surprises.
Your membership also unlocks: