How US Tech Giants Are Diluting Europe’s AI Code of Practice

US firms are pushing to weaken the EU AI Code of Practice, risking public trust and the integrity of AI regulation. The Commission must prioritize European citizens over corporate interests.

Published on: Jul 01, 2025

How US Firms Are Weakening the EU AI Code of Practice

The EU AI Act is the first broad framework regulating artificial intelligence, with particular focus on the most powerful models, known as general-purpose AI (GPAI). To give providers of these models a streamlined route to compliance, the EU is drafting a Code of Practice. The process involves four expert-led working groups and nearly 1,000 stakeholders from industry, academia, and civil society. The final Code is expected by August 2025.

However, as the process nears completion, a handful of leading US companies have gained privileged access and are pushing for a watered-down version of the Code. This intense lobbying threatens the legitimacy of the process and conflicts with the AI Act’s goal to protect European citizens' interests. It also contradicts these companies’ stated commitment to the public good.

An Inclusive Process — But Who Does It Serve?

The drafting of the Code has been transparent and inclusive, led by 13 top scientists and involving a broad range of stakeholders. GPAI providers have always had a key role. Yet, the real test is whether these US companies understand that AI rules are a public matter, not theirs to decide alone. By pushing the European Commission to prioritize their interests over those of all other stakeholders, these firms risk undermining the whole process and damaging their credibility as responsible corporate citizens.

Some US companies equate weaker regulation with greater innovation, hoping to benefit from the EU’s ambition to lead in AI. But this view misses the mark. As noted by AI industry leaders, Europe’s main challenges are market fragmentation and slow AI adoption, not over-regulation. Meanwhile, criticism of the AI rulebook from US political figures adds to the suspicion that Big Tech’s support for the Code is more about political influence than genuine cooperation.

The Code Has Become Overly Politicized

In an attempt to show openness to innovation and ease tensions across the Atlantic, some EU actors treat Big Tech’s signatures as a measure of the Code’s success. This approach is harmful. It allows companies to use refusal to sign as leverage to weaken the Code. Furthermore, several US firms have refused to sign in order to align with their government’s opposition to European regulation. For example, Meta announced in early 2025 that it would not sign, months before the Code was finalized, illustrating how signing has become detached from the Code’s actual content.

The Code should be a practical compliance tool, not a political bargaining chip.

Consequences of Not Adhering to the Code

Companies that choose not to follow the Code must demonstrate compliance through alternative measures that meet the AI Act’s standards. This is a complex and costly process; the Code, by contrast, offers a clear and straightforward compliance path. Without it, firms face significant hurdles in proving their AI systems meet legal requirements.

Since the AI Act is binding law in Europe, companies that want access to the EU market must comply. If Big Tech refuses to adhere to the Code, or if the Commission does not formalize its validity, two outcomes are likely. First, the Commission will have to enforce the AI Act directly, possibly with stricter rules. Second, US companies may face increased civil liability for negligence under US law if they fail to take reasonable care, such as by signing and following the Code, especially given the global risks posed by GPAI.

The “Complain, Then Comply” Pattern

Regulation aims to align companies’ actions with the public interest by setting clear standards. It is common for companies to initially resist new rules as impractical. For instance, Google once claimed it could not handle "right to be forgotten" takedown requests but now processes hundreds of thousands annually. Similarly, car manufacturers often push back against stricter emission targets, only to meet them later.

The European Commission should avoid falling for such tactics. While providers may find the new AI rules restrictive or vague, the Commission must hold firm to the AI Act’s intent—protecting European rights and setting a reasonable standard of care. The European Parliament’s special committee overseeing AI Act implementation signals zero tolerance for non-enforcement.

Once rules are in place, companies innovate to comply. For GPAI, this means investing in safer, more transparent, and trustworthy AI models. Establishing a clear standard of care will also make AI risks insurable, which can foster greater adoption and growth. Lack of trust and safety is a major barrier to AI uptake in Europe, a view shared by industry leaders and investors alike. A strong Code of Practice could thus drive both compliance and innovation.

Why the European Commission Must Resist Pressure

The Commission has a duty to ensure the Code reflects the AI Act’s spirit, protecting European citizens and the public interest. The drafting process brought together over 1,000 stakeholders, making it uniquely inclusive. If their efforts are overridden by the preferences of a few powerful AI companies, it would undermine civic engagement and democracy in the EU.

Importantly, the Commission can adopt the Code with or without the signatures of the involved companies. This adoption would establish the Code as the official method for assessing GPAI compliance. Non-signatories would still need to comply if they want to operate in Europe, and potentially in other regions recognizing the Code as a global standard of care.


