The EU AI Act: A Comprehensive Overview
The European Union’s Artificial Intelligence Act, known as the EU AI Act, is widely described as the world’s first comprehensive AI law. After years of development, it is becoming reality for the 450 million people across the 27 EU member states. Importantly, the Act is not limited to Europe alone: it applies to any company providing or deploying AI systems within the EU market. It governs, for instance, both the developers of AI tools such as CV screening software and the banks or businesses that use them. This legal framework sets clear rules for AI usage in Europe and beyond.
Why does the EU AI Act exist?
The Act establishes a uniform legal framework for AI across all EU member states. This uniformity aims to let AI-based goods and services move freely across borders without conflicting local rules. The EU wants to create a level playing field, build trust in AI technologies, and open opportunities for emerging companies. The framework is nonetheless strict: the Act sets high standards for what AI should and should not do in society, even though AI adoption is still in its early stages in many sectors.
What is the purpose of the EU AI Act?
According to EU lawmakers, the main goal is to promote "human-centric and trustworthy AI" while ensuring strong protection of health, safety, fundamental rights, democracy, the rule of law, and the environment. The Act aims to prevent harmful effects of AI systems while supporting innovation. That balance is delicate: much depends on how "human-centric" and "trustworthy" AI are defined in practice. The Act reflects the challenge of encouraging AI adoption without compromising ethical and legal standards.
How does the EU AI Act balance its different goals?
The Act uses a risk-based approach to regulation. It bans uses of AI deemed to pose "unacceptable risks." For "high-risk" AI applications, it imposes strict regulatory requirements. Meanwhile, AI systems with "limited risk" face lighter obligations. This tiered approach is designed to curb potential harm while allowing innovation where risks are lower.
Has the EU AI Act come into effect?
The Act entered into force on August 1, 2024, but its provisions apply through staggered deadlines, with new market entrants facing compliance earlier than existing providers. The first significant deadline was February 2, 2025, when bans on certain prohibited AI uses took effect, such as untargeted scraping of facial images from the internet or CCTV feeds to build databases. Most other provisions apply from August 2, 2026.
What changed on August 2, 2025?
Starting August 2, 2025, the EU AI Act’s obligations for providers of “general-purpose AI” (GPAI) models apply, with the strictest requirements reserved for models that pose systemic risk. GPAI models are trained on large datasets and can perform a wide range of tasks. Systemic risk arises from potential misuse, such as lowering the barriers to chemical or biological weapons development, or from loss of control over autonomous systems. The EU has published guidelines for GPAI providers, which include major players like Anthropic, Google, Meta, and OpenAI. Providers of models already on the market have until August 2, 2027, to fully comply.
Does the EU AI Act have teeth?
The Act includes penalties designed to be effective and dissuasive, even for large global companies, and they scale with the severity of the violation. Violations involving prohibited AI uses can result in fines of up to €35 million or 7% of global annual turnover, whichever is higher. Providers of GPAI models can face fines of up to €15 million or 3% of global annual turnover, again whichever is higher. Member states will specify enforcement details, but the framework clearly signals serious consequences for non-compliance.
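To make the “whichever is higher” rule concrete, here is a minimal illustrative sketch of how the two headline fine ceilings scale with company size. The function name and structure are our own and the tier figures come from the article above; this is a simplified illustration, not a compliance tool or legal advice.

```python
def fine_ceiling(tier: str, global_annual_turnover_eur: float) -> float:
    """Illustrative upper bound on fines under the EU AI Act's two headline tiers.

    The Act caps fines at a fixed amount or a percentage of global annual
    turnover, whichever is HIGHER. Figures follow the article above.
    """
    caps = {
        "prohibited_use": (35_000_000, 0.07),  # banned AI practices: €35M or 7%
        "gpai_provider": (15_000_000, 0.03),   # GPAI model providers: €15M or 3%
    }
    fixed_cap, turnover_share = caps[tier]
    return max(fixed_cap, turnover_share * global_annual_turnover_eur)

# Example: a firm with €2 billion in global turnover that deploys a prohibited
# AI practice faces a ceiling of €140M, since 7% of turnover exceeds €35M.
print(fine_ceiling("prohibited_use", 2_000_000_000))  # 140000000.0
```

For a small company, the fixed amount dominates; for a large one, the percentage does, which is why the cap bites even for the biggest global players.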
How fast do existing players intend to comply?
The voluntary GPAI code of practice, which includes commitments like avoiding training on pirated content, offers insight into early engagement with the Act’s requirements. In July 2025, Meta chose not to sign the code, signaling resistance. Google, meanwhile, signed despite reservations. Other signatories include Aleph Alpha, Amazon, Anthropic, Cohere, IBM, Microsoft, Mistral AI, and OpenAI. Signing the code does not necessarily mean full endorsement, but it shows willingness to engage.
Why have some tech companies been fighting these rules?
Concerns center on the potential impact of the EU AI Act on innovation and legal certainty. Google’s global affairs president expressed worry that the Act and code could slow AI development and deployment in Europe. Meta’s chief global affairs officer called the EU’s approach “overreach,” citing legal uncertainties and measures that go beyond the Act’s scope. Some European AI leaders also asked for a two-year delay before key obligations take effect, fearing the rules could hinder competitiveness.
Will the schedule change?
In July 2025, the EU rejected calls to pause the rollout and confirmed it would stick to the current timeline; the August 2, 2025, deadline took effect as planned. The remaining deadlines will show how enforcement unfolds in practice.
For legal professionals working with AI compliance, staying informed about these deadlines and requirements is essential. The EU AI Act establishes new standards that will shape AI deployment in Europe for years to come.
For more detailed courses on AI regulation and compliance, visit Complete AI Training.