AI at the World Scientists Summit: Acceleration, Governance, and the Human Choice
Artificial intelligence is now a decisive force across culture, the economy, and science. At the Artificial Intelligence Sciences Forum, held during the World Scientists Summit alongside the World Governments Summit 2026, more than 100 scientists, including Nobel laureates and leaders of major research institutions, examined how AI will influence jobs, policy, and the speed of discovery.
The message was consistent: the future of AI is a societal decision, not a foregone conclusion. Outcomes depend on human choices: how we invest, what we regulate, and where we place AI in the loop.
The Economy, Jobs, and the Long Build
Professor Christopher Pissarides framed AI as part of a long arc of economic development. Technology tends to transform jobs more than it eliminates them, and an overnight productivity miracle is unlikely. Adoption will be slowed by the basics: capital expenditure, energy supply, communication networks, and reskilling.
His advice was practical: invest now in infrastructure and energy, be patient on productivity, and prioritize systems that enhance human work rather than replace it. The bottlenecks are real, but they are solvable with deliberate policy and steady execution.
- Map roles to augmentation: redesign workflows so AI handles routine analysis while people handle judgment and oversight.
- Budget for compute, storage, and energy early; treat these as core R&D utilities, not afterthoughts.
- Stand up reskilling programs tied to actual tools and tasks, not abstract theory.
- Set realistic productivity timelines; pilot, evaluate, then scale.
Science at 10,000x: Experiments, Compute, and Cost
Professor Michael Levitt underscored that science advances through experimentation, and AI makes far more of it possible. By slashing the cost and time of computational work, researchers can explore orders of magnitude more ideas, with estimated speedups approaching 10,000x on some tasks.
Methodological barriers are also falling. Cross-disciplinary work, once slow because of differences in tooling and terminology, is becoming more natural and frequent.
- Build high-throughput experiment cycles: simulation → model selection → targeted wet-lab validation → feedback.
- Treat compute as an experimental variable: log runs, costs, and outcomes for reproducibility and budgeting (a minimal logging sketch follows this list).
- Adopt cross-disciplinary teams and shared taxonomies to minimize translation costs between fields.
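To make compute an auditable experimental variable, each job can append one record capturing its configuration, cost, and outcome. A minimal Python sketch, assuming a hypothetical `runs.jsonl` append-only log and illustrative field names; adapt to your scheduler and billing data:

```python
import json
import time
import uuid
from pathlib import Path

LOG_PATH = Path("runs.jsonl")  # hypothetical append-only run log

def log_run(task: str, config: dict, gpu_hours: float,
            est_cost_usd: float, outcome: str) -> str:
    """Append one experiment run with its compute cost and outcome."""
    record = {
        "run_id": uuid.uuid4().hex,  # stable id for cross-referencing results
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "task": task,                # e.g. a screening or simulation stage
        "config": config,            # model, seed, dataset version, etc.
        "gpu_hours": gpu_hours,      # compute treated as a logged variable
        "est_cost_usd": est_cost_usd,
        "outcome": outcome,          # e.g. "shortlisted", "discarded"
    }
    with LOG_PATH.open("a") as f:
        f.write(json.dumps(record) + "\n")
    return record["run_id"]

# Example: one simulation run feeding a wet-lab go/no-go decision.
run_id = log_run(
    task="candidate-screen",
    config={"model": "sim-v2", "seed": 42, "dataset": "lib-2026-01"},
    gpu_hours=3.5,
    est_cost_usd=12.60,
    outcome="shortlisted-for-wet-lab",
)
```

An append-only JSONL file keeps writes cheap and leaves a log that is easy to diff, audit, and query with standard tools.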
AI as Collective Systems: Data Flows and Incentives
Professor Michael Jordan described modern AI as large-scale technological social networks: systems that help people interact with accumulated human knowledge. These systems already support logistics, healthcare, and transportation, which puts data flows, incentives, and governance at center stage.
He also cautioned that foundation models excel at working with prior knowledge but are less effective at producing genuinely new scientific discoveries. Human-led inquiry and experimental design remain essential.
- Make data governance a first-class design problem: provenance, access controls, auditability, and incentives.
- Use human-in-the-loop protocols where novelty matters; require explicit uncertainty estimates and decision thresholds (see the routing sketch after this list).
- Separate knowledge retrieval from hypothesis generation and testing to avoid false confidence.
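One way to operationalize explicit uncertainty thresholds is to route each model output by a calibrated confidence score: routine, high-confidence cases proceed automatically, while novel or uncertain ones escalate to a human reviewer. A minimal sketch, assuming the model exposes such a score; the threshold values are illustrative:

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    hypothesis: str
    confidence: float  # assumed calibrated to [0, 1]

AUTO_ACCEPT = 0.95   # illustrative thresholds; tune per task and risk level
NEEDS_REVIEW = 0.60

def route(pred: Prediction) -> str:
    """Decide whether a model output may proceed without human review."""
    if pred.confidence >= AUTO_ACCEPT:
        return "auto-accept"        # routine, retrieval-style output
    if pred.confidence >= NEEDS_REVIEW:
        return "human-review"       # escalate to a domain expert
    return "reject-or-redesign"     # too uncertain to act on at all

print(route(Prediction("known binding site", 0.97)))   # auto-accept
print(route(Prediction("novel pathway claim", 0.71)))  # human-review
```

The thresholds themselves become governed artifacts: tuned per task, reviewed periodically, and recorded alongside the decisions they produced.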
Control, Trust, and Misinformation
Professor Whitfield Diffie argued that society will continue adopting AI, and warned against control schemes that hand broad authority to AI inside systems that lack transparency. Trust requires visibility, verification, and the ability to challenge outputs.
In a panel moderated by Professor Cohen, Dr. Kaishen Dong highlighted AI's upside for education and research but warned against student overreliance. Dr. Stuart Haber called deepfakes a direct threat to shared truth and urged international cooperation and cryptographic verification to protect information integrity.
- Adopt content provenance and verification standards in your lab communications and publications (see C2PA).
- Sign datasets, models, and key outputs; record lineage for audits and reproducibility (a signing sketch follows this list).
- Establish AI-use policies for students and staff: disclosure, citation, and independent verification.
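Even a standard-library approach covers the signing and lineage bullets: hash each artifact, sign the digest, and append the result to a lineage manifest. A minimal HMAC-based sketch in Python; the key handling and manifest fields are illustrative assumptions, and public-key signatures (e.g., Ed25519 via the cryptography package) or C2PA manifests are the stronger choice when third parties must verify:

```python
import hashlib
import hmac
import json
from pathlib import Path

SECRET_KEY = b"replace-with-managed-key"  # illustrative; use a real key store

def sign_artifact(path: Path, manifest: Path = Path("lineage.json")) -> dict:
    """Hash a file, sign the digest, and append it to a lineage manifest."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    signature = hmac.new(SECRET_KEY, digest.encode(), hashlib.sha256).hexdigest()
    entry = {"artifact": str(path), "sha256": digest, "hmac": signature}
    records = json.loads(manifest.read_text()) if manifest.exists() else []
    records.append(entry)
    manifest.write_text(json.dumps(records, indent=2))
    return entry

def verify_artifact(path: Path, entry: dict) -> bool:
    """Recompute the digest and check it against a signed manifest entry."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    expected = hmac.new(SECRET_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return digest == entry["sha256"] and hmac.compare_digest(expected, entry["hmac"])
```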
What This Means for Research Leaders
Leading in AI demands substantial infrastructure and long-term capital, paired with clear guardrails. The opportunity is to expand scientific throughput while keeping humans in charge of priorities, evaluation, and ethics.
- Infrastructure roadmap: compute, storage, networking, and energy capacity sized to your next 24-36 months.
- Workforce: budget time for reskilling and validation skills; consider focused learning paths that tie to daily work.
- Governance: adopt an internal AI risk playbook aligned to recognized frameworks like the NIST AI RMF.
- Experiment velocity: standardize data pipelines and evaluation metrics so teams can test more ideas with less friction (see the harness sketch after this list).
- Integrity: deploy provenance, cryptographic signing, and review protocols to keep results trustworthy.
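A shared evaluation harness is one concrete way to standardize metrics: every team scores experiments against the same registered functions, so results stay comparable. A minimal sketch, assuming a hypothetical in-process metric registry:

```python
from typing import Callable, Dict, List

# Hypothetical shared registry: all teams evaluate against the same
# metric functions, so results stay comparable across experiments.
METRICS: Dict[str, Callable[[List[float], List[float]], float]] = {}

def metric(name: str):
    """Register a metric function under an agreed-upon, shared name."""
    def wrap(fn):
        METRICS[name] = fn
        return fn
    return wrap

@metric("mae")
def mean_absolute_error(y_true, y_pred):
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def evaluate(y_true: List[float], y_pred: List[float]) -> Dict[str, float]:
    """Run every registered metric so no experiment skips a standard check."""
    return {name: fn(y_true, y_pred) for name, fn in METRICS.items()}

print(evaluate([1.0, 2.0, 3.0], [1.1, 1.9, 3.4]))  # ≈ {'mae': 0.2}
```

The registry pattern makes the metric set explicit and versionable: adding a new standard check automatically applies to every experiment.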
The through line across all sessions was clear: expand the frontier of science with AI, but keep humans deciding what to test, why it matters, and how results are judged. That balance, augmentation over replacement, will determine who builds durable advantages in research.