Global AI Regulation: How Countries Are Tackling Risks and Seeking Common Ground

Countries vary widely in AI regulation, from the US’s innovation-first approach to the EU’s strict risk-based rules. Global cooperation is key to balancing safety and progress.

Published on: May 24, 2025

Striking the Balance: Global Approaches to Mitigating AI-Related Risks

Modern technologies have outpaced existing legal frameworks, creating regulatory challenges worldwide. Countries take markedly different approaches to AI oversight, which complicates efforts to establish a unified global standard. This divergence was evident at the AI Action Summit held in Paris in February 2025, where 60 nations signed a statement emphasizing inclusivity and openness in AI development. Notably, the UK and US declined to endorse the final statement, reflecting the lack of consensus on how to address specific AI risks such as security threats.

Tackling AI Risks Globally

AI regulation varies significantly across countries, generally falling between two poles represented by the United States and the European Union.

The US Approach: Innovate First, Regulate Later

The United States favors a market-driven approach with limited federal AI-specific legislation. Key measures include the National AI Initiative Act of 2020, which coordinates federal research, and voluntary guidance such as the AI Risk Management Framework from the National Institute of Standards and Technology (NIST). A 2023 executive order set AI safety standards for federal projects, but it was revoked in 2025, signaling a shift back toward prioritizing innovation over regulation.

Critics argue that the US approach produces fragmented rules without enforceable standards and leaves gaps in privacy protection. Even so, legislative activity is accelerating: nearly 700 AI-related bills were introduced at the state level in 2024. The US government appears intent on finding regulatory solutions that do not stifle innovation.

The EU Approach: Prevention and Risk Management

The European Union has taken a precautionary stance with the Artificial Intelligence Act (AI Act), which entered into force in August 2024. The law applies a risk-based framework, imposing strict obligations on high-risk AI systems such as those used in healthcare and critical infrastructure. Some uses, like government social scoring, are banned outright. Compliance is mandatory for any AI system offered on the EU market, even if it was developed abroad.

Concerns about the EU’s model include its complexity and the potential to hinder competitiveness. Some critics also question whether it sets a strong enough standard for human rights protection.

Finding a Middle Ground

The United Kingdom has adopted a lighter framework focused on safety, fairness, and transparency. It empowers existing regulators like the Information Commissioner’s Office to enforce principles within their sectors. The UK’s AI Opportunities Action Plan promotes investment and domestic AI development, while the AI Safety Institute (AISI) evaluates the safety of advanced AI models through collaboration with developers.

However, the UK’s approach faces criticism for limited enforcement powers and a fragmented regulatory landscape without a central authority. Other countries also position themselves between the US and EU models. For example:

  • Canada has proposed the Artificial Intelligence and Data Act (AIDA), which aims to balance innovation with safety and ethics.
  • Japan promotes a “human-centric” AI development model with guidelines for trustworthiness.
  • China enforces tight state control, requiring security assessments and alignment with socialist values.
  • Australia has introduced an AI ethics framework and is updating privacy laws to address AI challenges.

How Can International Cooperation Be Established?

The differing regulatory approaches around the world complicate efforts to agree on baseline standards for AI risks such as data privacy and intellectual property. International cooperation is essential to create frameworks that protect against risks without hindering innovation.

Organizations such as the Organisation for Economic Co-operation and Development (OECD) and the United Nations are actively working to develop international AI standards and ethical guidelines.

Given the speed of AI advancements, legal professionals must engage in ongoing dialogue to find common ground and help shape coherent policies. For those in the legal field seeking to deepen their knowledge of AI and its implications, exploring targeted courses can provide practical insights into emerging challenges and regulatory trends. Consider visiting Complete AI Training for specialized learning opportunities.