AI Deregulation for Innovation Faces Strong Pushback from Citizens, Global Survey Finds

Governments shift focus from strict AI regulation to boosting innovation, but citizens demand strong oversight to protect rights and ensure safety. Public trust depends on balancing growth with accountability.

Categorized in: AI News, Government
Published on: Jul 29, 2025

Governments Push to Ease AI Regulation, But Citizens Push Back

Across the globe, governments have shifted their stance on AI regulation. Initially, many aimed to build strong frameworks that ensured AI was fair, responsible, and respectful of fundamental rights. For example, the Biden administration’s 2023 Executive Order promised standards for AI safety, privacy, equity, and innovation. The European Commission introduced the AI Act with a focus on human-centric AI, emphasizing values and rights.

But recent developments signal a pivot. The European Commission’s new AI Continent Action Plan puts innovation, competitiveness, and tech sovereignty at the top of the agenda. Regulation is now framed more as a barrier than a safeguard, and officials are discussing simplifying, or even pausing, enforcement of key AI laws. The focus is increasingly on boosting the AI industry and fostering rapid growth, sometimes at the expense of earlier commitments to public values.

Industry Pressure and Policy Shifts

This shift aligns with pressure from powerful tech companies and political actors, including the US administration under President Trump, which has pushed for minimal regulation to secure global AI dominance. Within Europe, influential tech lobbyists and some member states advocate a pro-business reset, arguing that competition will serve consumers better than strict regulation.

However, throughout these debates, one important group remains largely unheard: citizens. When governments discuss AI regulation, citizens’ perspectives on safety, rights, and innovation often take a back seat.

What Citizens Think About AI Regulation

To fill this gap, a global survey was conducted in April 2025 across six countries: Brazil, Denmark, Japan, the Netherlands, South Africa, and the US. Nearly 7,000 people participated, representing a broad demographic spectrum.

Key Findings on Priorities

  • Protection of human rights is the top priority everywhere.
  • Economic well-being and national security follow closely, with some variation by country.
  • Religious beliefs and local traditions rank lowest but are more significant in Brazil, South Africa, and the US.
  • Technological innovation matters, especially in Japan, South Africa, and the US.
  • Environmental concerns are central in Brazil; in Denmark and the Netherlands, social relationships take precedence over environmental protection.

Views on Regulation and Governance

  • Citizens reject the idea that tech companies should develop AI without government oversight.
  • They want governments to decide when AI is safe or unsafe.
  • Regulation should focus on high-risk AI systems, rather than all AI.
  • Developers should design AI that respects users’ rights and ensures safety.
  • Users want the right to complain if their rights are violated.
  • There is strong support for involving citizens in AI design processes.

These views were consistent across the six countries surveyed.

Implications for Government Officials

These findings send a clear message to policymakers and regulators: citizens expect their governments to maintain strong oversight over AI technologies. The push to reduce regulation in favor of rapid innovation faces significant public resistance.

Public trust hinges on balancing innovation with safety, rights, and clear accountability. Ignoring these demands risks alienating the very people AI is meant to serve. Governments must ensure AI development includes citizen input and safeguards against harm.

For officials working on AI policy, this means:

  • Prioritizing human rights and economic security in AI frameworks.
  • Designing regulatory approaches that focus on high-risk AI applications.
  • Ensuring transparency and avenues for citizens to raise concerns.
  • Engaging the public actively in decision-making processes.

Without public buy-in, AI innovation may lack legitimacy and face resistance. Governments should balance industry interests with the long-term needs and rights of citizens.

Further Resources

Government professionals interested in expanding their understanding of AI policy and regulation can explore training options at Complete AI Training. These courses focus on practical skills and governance challenges related to AI.

For more on international AI regulatory efforts, the European Commission’s AI Act provides a starting point.

