AI Viewed More Negatively Than Climate Science or General Science
A recent study from the Annenberg Public Policy Center (APPC) examines how Americans perceive artificial intelligence (AI) compared to climate science and science overall. Since ChatGPT’s public release in late 2022, AI has been a hot topic, stirring both excitement and concern.
Understanding public attitudes toward AI is essential because these perceptions influence how the technology develops and is regulated. The APPC researchers surveyed a nationally representative sample of U.S. adults to assess opinions on AI science and scientists. They compared these views to perceptions of climate science and general scientific fields using a framework called the “Factors Assessing Science’s Self-Presentation” (FASS), which measures credibility, prudence, unbiasedness, self-correction, and benefit.
Key Findings: AI Scientists Viewed More Negatively
- AI scientists receive less favorable ratings than climate scientists or scientists in general.
- The main driver of this negativity is concern about prudence: specifically, the belief that AI research is creating unintended consequences.
- Despite AI becoming more common in everyday life, public perceptions did not improve significantly from 2024 to 2025.
This suggests that the public remains wary about AI’s potential risks, especially around how carefully scientists manage the technology’s impact.
Political Factors and Polarization
Science perceptions often reflect political divides. Climate science has been heavily politicized, and confidence in medical and general scientists declined among Republicans following the COVID-19 pandemic. However, the study finds that AI perceptions are less politically polarized.
This lack of polarization indicates that AI has not yet become a partisan issue in the U.S. The lead researcher notes that recognizing these negative perceptions can help shape clearer, more transparent communication about AI risks and regulations.
Implications for Science Communication and Policy
Public unease about unintended consequences highlights the need for ongoing, transparent evaluation of AI research and its governance. Effective messaging that addresses these concerns can build trust and support for responsible AI development.
For professionals working in science and research, these insights stress the importance of engaging with public concerns and communicating AI progress with clarity and accountability.
Those interested in expanding their expertise in AI technologies and responsible AI practices can explore specialized courses available at Complete AI Training.