
Fostering Effective Policy for a Brave New AI World: A Conversation with Rishi Bommasani
Date: September 08, 2025
Topics: Regulation, Policy, Governance
Rishi Bommasani, senior research scholar and policy fellow at Stanford's Institute for Human-Centered AI, bridges technical AI research and societal governance. Since he began his academic career nine years ago, his focus has expanded from building AI systems to addressing their broad social and economic impacts.
Recently, Bommasani co-authored a paper in Science with 19 other experts, outlining a vision for evidence-based AI policy. In this discussion, he shares insights on his contributions, predictions for AI governance, and ongoing challenges in the field.
Shifting Research Focus Alongside AI’s Growth
AI’s deployment pace has accelerated considerably, shrinking the gap between research and real-world use. Since beginning his PhD in 2020, Bommasani has shifted from developing AI models to exploring governance frameworks. He notes that academia finds it increasingly difficult to lead on AI development given its capital intensity, but stresses that academic voices are still needed in policy discussions.
Bridging AI research and policy has become a central concern of his work, which emphasizes interdisciplinary collaboration as the way to guide responsible AI development.
Key Contributions to AI Policy and Governance
Bommasani highlights two major impacts from his work. First, the 2021 paper that coined the term “foundation models” provided a conceptual framework that influenced major policy efforts such as the EU AI Act and the U.S. Executive Order on AI. The term has since become central to how large-scale AI models are defined and regulated.
Second, his efforts to close the gap between AI research and policy involved leading consensus-building on open-source large language models. This included advising the European Commission on implementing the EU AI Act and working with the U.S. government, including the White House and agencies such as the NTIA. The approach emphasized evaluating new AI risks in the context of existing technologies, and it has shaped current U.S. policy on open models.
Beyond Regulation: Exploring Market and Business Incentives
Governance doesn’t rely solely on public policy. Bommasani points out that in the U.S., many digital technologies have operated with little government intervention, relying more on market forces. Re-engineering business incentives might offer a more agile and lasting approach than regulation alone, especially given the slow pace and political shifts in government policy.
Why Evidence-Based AI Policy Matters
Defining credible evidence in AI policy is complex. AI lacks the established evidentiary standards of fields such as public health or economics, so policymakers must balance limited real-world data with theoretical models. Bommasani advocates for creating standards that encourage faster, more reliable evidence generation, for example by supporting third-party AI testing through legal safe harbors similar to those protecting “white hat” hacking in cybersecurity.
Such measures would allow independent researchers to evaluate AI systems without fear of retaliation, improving transparency and safety.
Challenges Unique to AI Governance
General-purpose technologies like AI affect society at a foundational level, far beyond the scope of traditional tech regulation. AI’s risks are broad and intertwined, including bias, privacy violations, cybersecurity threats, and geopolitical tensions. Unlike technologies with narrower applications, AI reshapes social and economic structures.
Bommasani notes that while progress is visible in areas like self-driving car safety, how to understand and improve the safety of language models remains uncertain. Long-standing problems such as privacy violations and bias carry over from older technologies, compounding the difficulty of AI governance.
Current Status of AI Policy
Most AI policy ideas remain speculative, with few implementations and limited data on their effectiveness. However, Bommasani sees encouraging trends: dedicated government bodies focused on AI and a growing community of academic and non-governmental organizations studying governance. This expanding engagement increases the likelihood of developing effective policies.
The Role of Interdisciplinary Collaboration
Interdisciplinary centers like Stanford HAI play a crucial role by connecting scholars across fields to tackle AI’s societal impact. Bommasani’s 2021 “foundation models” paper involved over 100 researchers from 10 departments, highlighting the importance of legal, economic, and political perspectives alongside technical expertise.
Such collaboration is essential as AI continues to intersect with all aspects of society, raising questions that no single discipline can answer alone.