Italy Enacts First National AI Law in the EU, with Prison Terms for Deepfake Abuse
Italy has enacted an AI law with prison terms for harmful deepfakes, sector-specific disclosure duties, and human oversight requirements. The first of its kind in the EU, it ties IP protection to human creativity, designates enforcement agencies, and backs a €1 billion fund.

Italy Passes First National AI Law in the EU: What Legal Teams Need to Know
Italy has enacted a comprehensive AI law that sets criminal penalties, governance duties, and funding to steer AI use toward public interest. The statute aligns with the EU's broader AI Act and centers on transparency, human oversight, cybersecurity, privacy, and measured innovation.
Core takeaway for counsel: harmful AI-generated or manipulated content faces criminal exposure, sectoral AI use triggers disclosure and oversight duties, and IP protection hinges on human creativity. Parliament approved the bill after a year of debate, signaling broad political backing.
Criminal Liability for Deepfakes
Individuals who distribute harmful AI-generated or manipulated content, including deepfakes, face prison terms of 1 to 5 years. Penalties increase when the technology facilitates crimes such as fraud or identity theft.
For litigation and enforcement teams, expect a rise in criminal complaints and preservation requests related to synthetic media. Corporate communications and platform policies should be updated to ensure quick takedown pathways and audit trails.
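Where takedown pathways need audit trails, a tamper-evident log entry can be as simple as hashing the reported content alongside a timestamp. The sketch below is a minimal illustration; the field names and `log_takedown` helper are hypothetical assumptions, not requirements of the Italian statute.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
import hashlib

@dataclass
class TakedownRecord:
    """One audit-trail entry for a reported piece of synthetic media (illustrative schema)."""
    content_url: str
    reporter: str
    reason: str          # e.g. "suspected deepfake"
    content_sha256: str  # hash of the content as received, for evidence preservation
    logged_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_takedown(content_url: str, reporter: str, reason: str,
                 content_bytes: bytes) -> TakedownRecord:
    """Hash the reported content and produce an audit entry before removal."""
    digest = hashlib.sha256(content_bytes).hexdigest()
    return TakedownRecord(content_url, reporter, reason, digest)
```

Preserving the hash before removal lets counsel later show what was taken down and when, even if the underlying file is gone.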
Workplace and High-Impact Sectors
AI systems used in workplaces and in sectors such as healthcare, education, justice, and sports must include clear disclosure and human oversight. This places operational burdens on employers and vendors to document oversight, decision rationale, and the human role in outcomes.
Children under 14 will require parental consent to access AI services. Product counsel should verify age-gating, consent capture, and retention practices.
Intellectual Property and Data Mining
AI-assisted works are protected only if they reflect genuine human creativity, echoing the U.S. Copyright Office's stance. Rights-holders should review authorship claims and maintain records of human contributions in hybrid works.
Text and data mining with AI is limited to non-copyrighted material or scientific research by approved institutions. Dataset provenance logs, license inventories, and researcher accreditation checks become essential controls.
Enforcement and Governance
Enforcement will be handled by the Agency for Digital Italy and the National Cybersecurity Agency. The framework is consistent with the EU's approach to risk-based AI governance and human-centric safeguards.
For broader EU context, consult the EU AI Act text on EUR-Lex; for national guidance and digital governance updates, follow the Agency for Digital Italy.
Investment Signal
The law commits up to €1 billion via a state-backed venture fund for AI, cybersecurity, and telecommunications. While modest relative to U.S. and Chinese spending, it indicates clear government support and may influence vendor availability and M&A strategy.
Compliance Checklist for Legal and Compliance Teams
- Update criminal risk policies to cover creation and distribution of deepfakes; add rapid takedown and evidence preservation steps.
- Implement disclosure notices and human-in-the-loop procedures for AI used in HR, healthcare, education, justice, and sports contexts.
- Refresh vendor and DPA templates: audit rights, model/dataset provenance, incident notice, and indemnities related to manipulated media.
- Verify parental consent flows for under-14 users; align consent records with privacy and retention requirements.
- Document human authorship in IP filings; add internal guidance for hybrid AI-human works.
- Limit TDM to permitted sources; keep a license register and whitelist approved research institutions.
- Establish an incident response runbook for AI misuse (fraud, identity theft, impersonation) with law enforcement touchpoints.
- Train staff to recognize and escalate manipulated media and disclosure failures.
Comparative Note: Denmark's Face/Voice Rights
In June, Denmark granted people copyright over their faces and voices to counter digital impersonation. Italy's approach adds criminal penalties for harmful synthetic content, creating complementary civil and criminal hooks across jurisdictions.
Cross-border platforms should harmonize takedown standards and identity-verification workflows to meet both consent-based and criminal-risk frameworks.
Open Questions to Track
- How courts define "harmful" AI-generated content and apply aggravating factors tied to predicate crimes.
- Scope and accreditation process for "approved institutions" conducting research TDM.
- Interplay with final EU AI Act obligations and sectoral rules in healthcare and justice.
- Cross-border enforcement and cooperation between national agencies and platforms.
Government leaders framed the law as an "Italian way" to develop and govern AI: growth within guardrails that prioritize people, rights, and security. As enforcement ramps up, early compliance will reduce litigation exposure and protect brand integrity.
Need structured upskilling for in-house legal and compliance teams on AI risk and governance? Explore role-based learning paths: Complete AI Training.