Clear, Harmonised AI Rules: Michèle Finck's Case for Smarter Regulation
AI is the defining technology of this era, and the law has to keep pace. That's the core message from Michèle Finck, Professor of Law and Artificial Intelligence at the University of Tübingen. Her view is simple: society sets its rules through law, and new technologies should sit inside those rules, not outside them.
Finck stresses AI's dual-use nature. The same systems helping detect cancer or advance brain research can be misused to design harmful substances. That's why she argues for clear, predictable guardrails that maintain space for innovation while managing real risk.
Harmonisation beats fragmentation
The EU's AI Act, in force since August 2024, is built to create common standards across the single market, not to slow development. Finck describes it as a legislative umbrella: a broad framework with chapters that apply differently across AI systems and actors, depending on the risk and use case.
Take high-risk systems. AI embedded in medical devices such as pacemakers falls into this bucket and faces tight requirements on data quality, cybersecurity, and reliability. The aim is practical: if a system can affect health, safety, or fundamental rights, it needs stronger controls.
For a high-level overview, see the European Commission's page on the AI Act.
AI Act and GDPR: separate tracks that often run together
Finck draws a clean line: the AI Act mostly targets how systems function; the GDPR targets the processing of personal data. In many deployments, both will apply, and teams will need to read them alongside each other.
Expect meaningful penalties. The AI Act follows the GDPR model: fines are capped at a percentage of worldwide annual turnover or a high fixed amount, whichever is higher. Yet Finck does not expect the Act to force most companies to rip up their business models; the obligations are moderate when you plan ahead.
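To make the turnover mechanics concrete, here is a minimal sketch (in Python, purely for illustration) of the "whichever is higher" cap both regimes apply to most undertakings. The tier figures are the published maxima for the most serious violations; verify the mapping to any specific obligation, and any SME carve-outs, against the final texts.

```python
# A minimal sketch of how turnover-based fine caps work under the
# "whichever is higher" rule. Tier figures reflect the published maxima
# (AI Act Art. 99, GDPR Art. 83); confirm them before relying on a number.

def fine_cap(fixed_eur: int, turnover_share: float, annual_turnover_eur: int) -> float:
    """Return the maximum possible fine: the fixed ceiling or the
    turnover-based ceiling, whichever is higher."""
    return max(fixed_eur, turnover_share * annual_turnover_eur)

# AI Act, prohibited practices: EUR 35M or 7% of worldwide turnover.
print(fine_cap(35_000_000, 0.07, 500_000_000))    # -> 35,000,000
print(fine_cap(35_000_000, 0.07, 2_000_000_000))  # -> 140,000,000

# GDPR, serious infringements: EUR 20M or 4% of worldwide turnover.
print(fine_cap(20_000_000, 0.04, 2_000_000_000))  # -> 80,000,000
```

The practical takeaway: for large providers, the turnover percentage, not the fixed amount, sets the ceiling.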
For the GDPR text, see Regulation (EU) 2016/679.
Enforcement will differ, at least at first
Early application will vary across member states. Each country must appoint enforcement bodies, fund them, and staff them with the right expertise. That alone creates unevenness in the short term, something counsel should anticipate in cross-border operations.
The law's biggest flaw: drafting quality
Finck is blunt about the AI Act's weaknesses: it is long, complex, and in places unclear. There are cross-reference errors and inconsistent definitions that create uncertainty for lawyers trying to map obligations to real systems.
She expects the EU to clean up parts of the text through the upcoming Digital Omnibus Package, which aims to streamline recent digital regulations. Until then, legal teams should document assumptions and be ready to adjust as guidance lands.
The dual-use reality calls for balance
Finck uses a simple analogy: a knife can cook dinner or cause harm. AI is the same. The answer is not blanket restriction but proportionate rules that reduce misuse risk while keeping the door open for research and practical benefits: new therapies, better efficiency, and offloading dull tasks so people can focus on meaningful work.
What legal teams should do now
- Inventory AI use across products, operations, and vendors, and classify each system by risk (a minimal inventory sketch follows this list).
- For high-risk use, lock in data quality, cybersecurity, and reliability testing. Keep documentation audit-ready.
- Plan for dual compliance: align AI Act obligations with GDPR processes (DPIAs, lawful basis, data minimisation).
- Update contracts: allocate AI Act responsibilities and pin down logging, security, and incident-reporting duties with suppliers.
- Set up post-market monitoring and a clear escalation path for incidents and model updates.
- Track national authority appointments and early guidance. Expect uneven enforcement patterns.
- Train product, risk, and procurement teams so compliance is built in, not bolted on.
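For teams operationalising the first two items, here is a minimal sketch of what a machine-readable risk-register entry could look like. Everything in it (the field names, the tier labels, the hypothetical "triage-assistant" system and "ExampleVendor GmbH") is an illustrative assumption, not AI Act terminology or a prescribed format.

```python
# A minimal sketch of an AI system inventory entry, assuming a simple
# in-house risk register. Fields and tiers are illustrative only;
# adapt them to your own classification scheme and the final legal texts.
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    name: str
    owner: str                      # accountable business owner
    vendor: str | None              # None if built in-house
    purpose: str                    # intended use, in plain language
    risk_tier: RiskTier
    processes_personal_data: bool   # triggers the GDPR track (DPIA etc.)
    controls: list[str] = field(default_factory=list)  # audit-ready evidence

register = [
    AISystemRecord(
        name="triage-assistant",
        owner="Clinical Ops",
        vendor="ExampleVendor GmbH",  # hypothetical supplier
        purpose="Prioritise incoming patient messages",
        risk_tier=RiskTier.HIGH,
        processes_personal_data=True,
        controls=["data-quality checks", "cybersecurity review", "reliability testing"],
    ),
]

# Surface the systems that need the strictest treatment first.
for record in register:
    if record.risk_tier is RiskTier.HIGH or record.processes_personal_data:
        print(record.name, record.risk_tier.value, record.controls)
```

Keeping the register in a structured form like this makes the later steps, from contract allocation to post-market monitoring and training, easier to track and audit.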
Why this work matters
For Finck, the draw is the surge of new legal questions, and the chance to build frameworks that steer AI toward practical benefits. The AI Act is central to a functional single market; without it, cross-border providers would face 27 different regimes instead of one set of rules.
Bottom line for counsel: treat the AI Act as a system of obligations, not a single rule. Get the basics right, document your choices, and stay flexible as clarifications arrive. That's how you protect your organisation without slowing progress.
If your team needs structured upskilling on AI tools and governance, explore role-based options here: Complete AI Training - Courses by Job.