Posted: 14 November 2025, 13:29
AI in the legal spotlight: legal control or voluntary restraints?
AI is now a lever of advantage across every strategic domain - science, media, industry, transport, medicine, agriculture, finance, defense, and space. That makes governance a first-order legal issue, not a side note. The global community is moving fast to set the rules, and alignment (or misalignment) will define both market access and national security posture.
"With the ability to self-learn, this tool can destroy humanity if it is let out of control… On the one hand, modern technologies create thousands of new opportunities and prospects. On the other hand, they generate many risks and threats - fake news, disinformation, attacks on critical infrastructure." - President of Belarus Aleksandr Lukashenko, CSTO summit, November 28, 2024.
Risk-based regulation: where the EU drew the lines
The EU AI Act, proposed in 2021 and approved in 2024, establishes a harmonized framework with obligations tied to risk. The core idea is straightforward: the higher the potential harm to health, safety, or fundamental rights, the tighter the controls. For legal teams, that translates into classification first, compliance next.
- Prohibited AI (unacceptable risk): Systems using subliminal manipulation or exploiting vulnerabilities in ways likely to cause physical or psychological harm.
- High-risk AI: IT products and uses that present a significant threat to health, safety, or fundamental rights. These require risk management, data and technical documentation, human oversight, robustness, post-market monitoring, and incident reporting.
- Other AI systems: Largely permitted with minimal obligations, since the Act's maximum-harmonization approach limits member states from imposing further national requirements.
The Act also introduces duties for general-purpose and generative models (e.g., data governance disclosures, copyrighted content handling, and model transparency). For primary text and legal citations, see the official publication of the AI Act on EUR-Lex: Regulation (EU) 2024/1689.
Italy's comprehensive law: national scaffolding for EU alignment
Italy moved early, adopting a dedicated law on September 18 that sets principles for research, testing, development, and application of AI systems. Parliament delegated detailed rulemaking to the government to align national law with the EU AI Act and to regulate data, algorithms, and mathematical methods used for training.
A national coordination committee now steers policy across structures tied to digital innovation and AI. Copyright gets explicit protection: works created with AI can qualify when they reflect the author's intellectual contribution, and feeding online works into AI tools is permitted only if the works are unprotected or the use serves scientific research or cultural heritage purposes.
CIS model law: a blueprint for member states
Across the post-Soviet space, the CIS Interparliamentary Assembly adopted the model law "On AI Technologies" (Resolution No. 58-8, April 18). It spans the full AI lifecycle - research, design, development, evaluation, verification, operation, maintenance, monitoring, control, and disposal - and embeds principles like human rights priority, technical reliability and safety, transparency, oversight, and personal data protection.
Sanctions and liability are left to national legislation, giving member states room to set thresholds and enforcement mechanisms that fit their institutions and markets.
What the EU bans outright
- Emotion monitoring in workplaces and schools.
- AI systems that manipulate people subconsciously or exploit vulnerabilities with likely harm.
Additionally, certain hiring uses (e.g., automated sorting of job applications) and deployments built on generative AI tools face strict conditions under the high-risk or GPAI obligations. Expect technical documentation, risk controls, and transparency duties to be the default path to compliance.
Military exceptions: what the Act leaves out
The AI Act carves out defense and military applications. That omission keeps sensitive capabilities outside civilian oversight and, in practice, sustains strategic freedom of action. For counsel supporting dual-use products, this boundary makes scoping and contract language (civilian vs. defense end-use) essential.
Depth and flexibility: EU vs. CIS approaches
The CIS model law is highly specific across lifecycle phases and leaves broader latitude for national tailoring. The EU Act is equally comprehensive but strictly harmonized, limiting member states' ability to add divergent requirements for non-high-risk AI. Your compliance workload will depend on which market you serve and which authority leads enforcement.
Practical checklist for legal teams
- Inventory and classify: Map all AI uses to EU categories (prohibited, high-risk, other). Identify GPAI exposure for generative models.
- Basis and boundaries: Update privacy, IP, and data sourcing policies (copyright exceptions, research and heritage carve-outs, dataset licenses).
- High-risk controls: Implement risk management, data governance, human oversight, robustness testing, logging, and post-market monitoring.
- Transparency: Provide clear user information and, where required, labels for AI interactions and generated content.
- HR and education: Remove emotion recognition and similar banned functions from workplace and school deployments.
- Vendor contracts: Require technical documentation, risk artifacts, incident reporting, and audit rights from providers.
- Governance: Stand up an AI oversight committee; define accountability, approval gates, and incident escalation.
- Jurisdiction watch: Track EU implementing acts, national decrees (e.g., Italy), and CIS member transpositions for local obligations.
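The inventory-and-classify step above can be sketched as a simple internal compliance register. This is a minimal illustration: the tier labels, the use-case tags, and the `AISystem` record are assumptions chosen for the example, not terms defined in the AI Act's text - real classification requires legal analysis of each deployment.

```python
from dataclasses import dataclass

# Illustrative use-case tags loosely following the EU AI Act's structure.
# These sets are assumptions for a triage sketch, not an authoritative list.
PROHIBITED_USES = {"subliminal_manipulation", "workplace_emotion_monitoring"}
HIGH_RISK_USES = {"cv_screening", "credit_scoring", "medical_triage"}

@dataclass
class AISystem:
    name: str
    use_case: str          # internal tag for the deployment context
    is_gpai: bool = False  # general-purpose / generative model exposure

def classify(system: AISystem) -> str:
    """Map a system to a coarse EU-style risk tier for first-pass triage."""
    if system.use_case in PROHIBITED_USES:
        return "prohibited"
    if system.use_case in HIGH_RISK_USES:
        return "high-risk"
    return "gpai" if system.is_gpai else "other"

# Example triage pass over a (hypothetical) inventory.
inventory = [
    AISystem("resume-sorter", "cv_screening"),
    AISystem("chat-assistant", "customer_support", is_gpai=True),
]
tiers = {s.name: classify(s) for s in inventory}
print(tiers)  # {'resume-sorter': 'high-risk', 'chat-assistant': 'gpai'}
```

A register like this only flags candidates for review; the "prohibited" and "high-risk" buckets then drive the remaining checklist items (risk controls, documentation, vendor terms) rather than serving as a final legal determination.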
Global stakes, policy choices
AI is a geopolitical factor as much as it is a market force. It can accelerate development or amplify systemic risks. That puts societies - and their legal systems - under pressure to balance innovation with accountability.
Debates over information control, civil liberties, and social influence run through every legislative draft. The legal task is clear: keep AI a tool for progress, without letting it erode rights, safety, or democratic oversight.