Harvard, Berkeley and Others Advocate for Science-Led AI Policy in Multi-Institution Study
Leading researchers from prominent universities and policy institutions—including the University of California, Berkeley; Stanford; Harvard; Princeton; Brown; the Institute for Advanced Study; and the Carnegie Endowment for International Peace—have published a co-authored article in Science calling for AI regulation grounded firmly in scientific evidence.
Titled "Advancing science- and evidence-based AI policy" and released on July 31, the article features contributions from UC Berkeley experts Jennifer Chayes, Ion Stoica, Dawn Song, and Emma Pierson. It argues that AI policy must be informed by rigorous scientific analysis, supported by clear processes to produce and apply credible, actionable evidence.
Establishing a Framework for Evidence-Driven Regulation
The authors present a model for evidence-based AI policy structured around three key elements:
- How evidence should inform policy decisions
- The current availability and quality of evidence across AI domains
- How regulation can drive the development of new evidence
A critical challenge the authors highlight is defining what qualifies as credible evidence. Because standards vary across policy areas, settling this definition is a prerequisite for applying evidence-based approaches effectively.
The article cautions against using the evolving nature of evidence as a justification to delay regulation, recalling past examples where industries exploited scientific uncertainty to resist oversight.
Concrete Recommendations and Regional Policy Impact
The article puts forward several policy recommendations aimed at increasing transparency and safety in AI development:
- Mandating more comprehensive safety disclosures from AI companies
- Incentivizing pre-release evaluations of AI models
- Monitoring AI systems post-deployment for potential harm
- Protecting independent researchers who analyze AI systems
- Strengthening social safeguards to mitigate risks
The authors emphasize that evidence must be not only credible but also actionable. They propose focusing on the marginal risk, the additional risk an AI system poses beyond existing technologies such as internet search engines, to better identify genuinely new threats and effective interventions.
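To make the marginal-risk framing concrete, here is a minimal sketch of the comparison it implies. The function name and the harm rates are illustrative assumptions for this example, not figures or methods from the Science article.

```python
# Illustrative sketch of the marginal-risk framing described above.
# All names and numbers below are hypothetical, not from the article.

def marginal_risk(harm_rate_with_ai: float, harm_rate_baseline: float) -> float:
    """Additional risk attributable to the AI system, relative to an
    existing technology (e.g., an internet search engine)."""
    return harm_rate_with_ai - harm_rate_baseline

# Hypothetical harm rates from a pre-release evaluation: the fraction
# of red-team queries that yielded actionable harmful content.
ai_rate = 0.012        # observed with the AI model under test (assumed)
baseline_rate = 0.009  # observed with ordinary web search (assumed)

print(f"Marginal risk: {marginal_risk(ai_rate, baseline_rate):.3f}")
# A positive value flags risk the AI adds beyond existing tools,
# which is where the authors suggest interventions should focus.
```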
The article builds on recent efforts by the Joint California Policy Working Group on AI Frontier Models, co-led by Jennifer Chayes. The group's final report, "The California Report on Frontier AI Policy," was submitted to Governor Gavin Newsom last month and is already influencing AI policy discussions among California legislators and agencies.
A Call for Evidence-Grounded Debate
The article concludes by acknowledging that adopting an evidence-based approach is only the beginning. It stresses the need for ongoing, informed debate to address core tensions in AI regulation. Such discussions should remain firmly rooted in the best available evidence to maintain democratic legitimacy.
For researchers and policy professionals interested in AI governance frameworks and the practical application of evidence-based regulation, the article offers a clear foundation for future regulatory efforts.