AI Giants Fail Safety Tests as New Studies Expose Alarming Gaps in Risk Management

Studies reveal major AI companies score poorly in risk management, with no firm above "weak." Experts warn of serious safety gaps amid growing AI threats.

Published on: Jul 18, 2025

AI Companies Fall Short in Risk Management, Studies Find

Recent studies find that leading AI companies show a worryingly weak commitment to managing the risks their technologies create. The findings point to serious gaps in safety practices even as AI's potential for harm grows, from enabling cyberattacks to the prospect of advanced systems that slip beyond human control.

Two nonprofit organizations, SaferAI and the Future of Life Institute (FLI), published the assessments to push top AI firms toward stronger safety standards. Their goal is clear: to spotlight which companies genuinely act on their promises about AI risk mitigation.

SaferAI's Risk Management Ratings

SaferAI evaluated major AI companies based on their risk management protocols, also referred to as responsible scaling policies. The results were sobering: no company scored above a "weak" rating. Anthropic led with 35%, followed closely by OpenAI at 33%, Meta at 22%, Google DeepMind at 20%, and Elon Musk’s xAI at 18%.

Some companies' performance has declined since the last review in October 2024. Both Anthropic and Google DeepMind saw their scores drop, allowing OpenAI to move ahead of Google this time. One notable lapse, according to SaferAI's founder, was Google releasing its Gemini 2.5 frontier model without publishing adequate safety information.

A spokesperson from Google DeepMind emphasized their ongoing commitment to AI safety but noted that the studies do not account for all their efforts or industry standards.

Anthropic’s Drop in Safety Commitments

Anthropic’s score declined partly because it removed commitments addressing insider threats shortly before launching its Claude 4 models, a step the study criticized as poor risk management practice. The study also noted that more detailed evaluation criteria contributed to some of the score changes.

On a positive note, xAI and Meta showed the largest improvements since October, with xAI moving from 0% to 18% and Meta from 14% to 22%.

Wider Safety Review by Future of Life Institute

FLI’s study took a broader look, assessing not just risk management but also how companies address current harms, existential risks, governance, and transparency. Independent experts reviewed public documents, research, and some nonpublic information provided by the companies.

Anthropic scored highest with a C+, OpenAI received a C, and Google a C-. Both xAI and Meta scored a D.

However, when it came to “existential safety”—the ability to control AI that might surpass human intelligence—all companies scored D or below. This indicates a widespread lack of clear plans to manage the most extreme risks.

Max Tegmark, president of the Future of Life Institute, summarized the findings bluntly: companies aim to build superintelligent AI but lack solid strategies to keep such systems under control.

What This Means for Management Professionals

For those managing AI projects or overseeing technology strategies, these findings serve as a warning. Implementing strong, transparent risk management protocols is essential—not just for regulatory compliance but to safeguard organizational and societal interests.

Staying informed about AI safety standards and encouraging your teams to adopt responsible practices can mitigate risks before they escalate. Consider exploring dedicated AI risk management training or courses to build expertise in this critical area.

For practical guidance and relevant AI courses, visit Complete AI Training’s latest offerings.

