Will Regulation Make the EU the Most Trusted Power in AI?
When the European Union passed the AI Act, the goal was clear: ensure artificial intelligence respects fundamental rights and remains safe for citizens. To support the law’s rollout starting next year, the EU introduced a voluntary Code of Conduct for companies developing AI. However, some firms argue that joining this code could slow down innovation.
The EU is inviting creators of generative AI chatbots—like ChatGPT, Mistral, Gemini, and Claude—to sign this voluntary Code of Conduct on general-purpose AI. Signing signals that a company's products are presumed compliant with the AI Act, which entered into force in 2024 and classifies AI systems by risk, from minimal to unacceptable. Companies that opt out may face stricter inspections and heavier administrative burdens.
Big names such as OpenAI and Anthropic support the code. Others, including Meta, have declined to sign, arguing it would stifle innovation. Meta has even launched tools it cannot fully deploy in Europe because of data protection rules. Whether or not companies sign, the AI Act itself still applies and takes precedence.
Gradual Implementation and Enforcement
The AI Act will be rolled out gradually through 2027. New rules for general-purpose AI models—those that power generative chatbots—come into force this month. Providers of existing models have two years to comply, but any new model entering the market must follow the rules immediately. Non-compliance can draw fines of up to €15 million.
Can Regulation and Investment Coexist?
The voluntary Code of Conduct provides guidance on respecting copyright, managing systemic risks from advanced AI, and improving transparency about how companies meet legal requirements.
Some experts see the regulation as a strategic bet: by setting strict rules, the EU positions itself as the world's most trusted AI provider. The US and China, by contrast, regulate more lightly and prioritize attracting AI investment.
Yet, financial support and regulation can work hand in hand. The EU recently announced over €200 billion in AI investments. Maintaining leadership in AI development is important, but it must be paired with a strong safety framework that upholds fundamental rights and promotes human-centered AI systems.
Addressing AI Risks and Promoting Literacy
Generative AI carries risks such as deepfakes, data theft, and mental health concerns linked to chatbot use. The EU hopes the AI Act's obligations on AI literacy will spark continent-wide campaigns and training, helping citizens grasp both the benefits and the dangers of these technologies.
For professionals interested in expanding their knowledge on AI and its implications, Complete AI Training offers a range of courses that cover AI fundamentals, safety, and ethical considerations.