What US healthcare companies and medical researchers need to know about Italy's new AI law
Italy and the US are tightening ties in life sciences, with cross-border AI projects growing fast. In September 2025, Italy became the first EU country to pass a comprehensive AI law (Law No. 132/2025). It sits alongside the EU AI Act [1] and brings immediate obligations for healthcare, with more guidance due by October 2026. If you build, sell, or deploy AI in Italy, or work with Italian institutions, you now have work to do.
Below is a practical breakdown of what changes, what's already enforceable, and how US healthcare organizations can move quickly without derailing ongoing clinical or research programs.
Why this matters now
- Immediate effect: National principles, transparency rules, and sector-specific requirements already apply in Italy.
- EU alignment: The law must be read consistently with the EU AI Act, which layers on risk-based requirements.
- Deadlines ahead: Additional decrees and guidance are expected by October 2026, and key EU AI Act obligations for device software hit from August 2026.
Patient rights and transparency: what you must implement
Italy's AI Law requires that patients be told when AI is used in their care [2]. This is not a generic clause buried in a consent packet; it is clear, specific notification tied to each use.
- Embed point-of-care notices in clinical workflows (EHR flags, patient portal prompts, discharge summaries).
- Provide plain-language patient materials in Italian explaining the AI's role, limits, and clinician oversight.
- Log notifications for auditability (a minimal sketch follows this list); align with hospital SOPs and clinical risk management.
- Offer turnkey documentation with your product to reduce lift for Italian providers.
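To make the logging point concrete, here is a minimal Python sketch of what an auditable notification record could look like. The schema, field names, system name, and storage approach are illustrative assumptions, not requirements drawn from Law No. 132/2025.

```python
# Minimal notification audit record; schema, field names, and storage are
# illustrative assumptions, not requirements from Law No. 132/2025.
import hashlib
import json
from dataclasses import asdict, dataclass
from datetime import datetime, timezone

@dataclass
class AINotificationEvent:
    patient_ref: str       # pseudonymized patient reference, not a direct ID
    ai_system: str         # e.g., "triage-assist v1.4" (hypothetical system)
    clinical_context: str  # where in the care pathway the AI was used
    channel: str           # "ehr_flag", "patient_portal", "discharge_summary"
    language: str          # patient materials should be available in Italian
    notified_at: str       # ISO-8601 timestamp, UTC

def pseudonymize(patient_id: str, salt: str) -> str:
    """One-way reference so the log itself holds no direct identifiers."""
    return hashlib.sha256((salt + patient_id).encode()).hexdigest()[:16]

def log_notification(patient_id: str, ai_system: str, context: str,
                     channel: str, salt: str) -> str:
    event = AINotificationEvent(
        patient_ref=pseudonymize(patient_id, salt),
        ai_system=ai_system,
        clinical_context=context,
        channel=channel,
        language="it",
        notified_at=datetime.now(timezone.utc).isoformat(),
    )
    line = json.dumps(asdict(event), ensure_ascii=False)
    print(line)  # in production, append to a tamper-evident store per SOPs
    return line

log_notification("MRN-001234", "triage-assist v1.4",
                 "emergency department triage", "ehr_flag",
                 salt="per-site-secret")
```

Pseudonymized references keep the trail auditable without turning the log into a new repository of identifiable health data.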
Non-discrimination and algorithmic fairness: make it measurable
The law bans AI-driven discrimination in healthcare access and requires bias testing and validation [3]. This shifts the bar from reactive complaints to proactive proof.
- Define fairness goals up front (e.g., equalized performance across gender, age, and ethnicity, where data exists and is lawful to process).
- Use representative training data; document provenance, curation, and known gaps.
- Run subgroup analyses during validation (see the sketch after this list); publish performance ranges and known failure modes.
- Establish continuous monitoring with drift and bias alerts post-deployment in Italy.
- Maintain a single source of truth: data sheets, model cards, risk files, and traceable change logs.
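As a starting point for the subgroup analyses above, the following sketch computes per-group AUC and sensitivity and flags large gaps. The column names, the 0.5 decision threshold, and the 0.05 AUC-gap flag are illustrative assumptions; the law does not prescribe specific metrics.

```python
# Per-subgroup validation sketch; column names, the 0.5 threshold, and the
# 0.05 AUC-gap flag are illustrative assumptions, not legal requirements.
import pandas as pd
from sklearn.metrics import recall_score, roc_auc_score

def subgroup_report(df: pd.DataFrame, group_col: str,
                    y_true: str = "label", y_score: str = "score",
                    threshold: float = 0.5) -> pd.DataFrame:
    rows = []
    for group, sub in df.groupby(group_col):
        if sub[y_true].nunique() < 2:
            continue  # AUC is undefined when a subgroup has only one class
        y_pred = (sub[y_score] >= threshold).astype(int)
        rows.append({
            group_col: group,
            "n": len(sub),
            "auc": roc_auc_score(sub[y_true], sub[y_score]),
            "sensitivity": recall_score(sub[y_true], y_pred),
        })
    report = pd.DataFrame(rows)
    # Flag subgroups whose AUC trails the best subgroup by more than 0.05.
    report["flagged"] = report["auc"] < report["auc"].max() - 0.05
    return report
```

Published performance ranges can come straight from this report, and the same function can run on post-deployment batches to feed drift and bias alerts.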
From August 2026, most AI-enabled medical device software will require notified body certification under the MDR, alongside EU AI Act compliance. Expect integrated reviews of safety, transparency, and fairness. Start assembling your technical documentation and bias evidence now.
Human oversight and clinical use
Fully automated medical decisions are prohibited. Physicians must retain final say, supported by CE-certified medical device software where applicable.
- Design UIs and workflows that surface rationale, uncertainty, and contraindications to support clinical judgment.
- Enable easy overrides and second reads; record who accepted or rejected AI suggestions and why (see the sketch after this list).
- Map risk allocation: the MDR requires financial coverage for product liability; align insurance, contracts, training, and SOPs with that framework.
- Provide clinician training focused on limitations, off-label risks, and known biases.
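The following sketch shows one way to capture the "who accepted or rejected and why" record referenced above. The schema and the rule that overrides require a rationale are assumptions for illustration, not terms from the law.

```python
# Sketch of a decision record capturing who accepted or rejected an AI
# suggestion and why; the schema and the override-rationale rule are
# illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass(frozen=True)
class AIDecisionRecord:
    case_ref: str             # pseudonymized case reference
    ai_suggestion: str        # what the system proposed
    model_version: str
    clinician_id: str         # who exercised final clinical judgment
    accepted: bool            # accepted as-is, or overridden
    rationale: Optional[str]  # free-text reason, required here on override
    recorded_at: str          # ISO-8601 timestamp, UTC

def record_decision(case_ref: str, suggestion: str, model_version: str,
                    clinician_id: str, accepted: bool,
                    rationale: Optional[str] = None) -> AIDecisionRecord:
    if not accepted and not rationale:
        raise ValueError("An override must carry a documented rationale.")
    return AIDecisionRecord(case_ref, suggestion, model_version,
                            clinician_id, accepted, rationale,
                            datetime.now(timezone.utc).isoformat())
```

Immutable, timestamped records like this double as evidence of human oversight during notified body and DPA reviews.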
Research access and synthetic data: a new path to datasets
Italy's AI Law deems certain health-related AI research by public or private non-profits-or private entities working with them-as being of "relevant public interest" under GDPR Article 9(2)(g) [4]. That allows use of health data for training AI, plus secondary use of de-identified data without consent, subject to notification to the Italian DPA and a 30-day standstill (Art. 8(5)).
- Partner with Italian public or non-profit entities; structure governance and data roles clearly.
- Prepare DPA notifications with protocols for security, minimization, and safeguards; plan for possible objections.
- Set up compliant cross-border transfer mechanisms for any data leaving the EEA.
The law also explicitly permits processing for anonymization, pseudonymization, and synthetization (creating synthetic data) of personal and special-category health data in these research contexts [5]. AGENAS will issue anonymization guidelines after consulting the DPA. Synthetic data can expand sample diversity and protect privacy; validate utility and re-identification risk, and document your method choices.
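Pending the AGENAS guidelines, a simple internal screen can catch synthetic records that are near-copies of real ones. The sketch below compares nearest-neighbor distances between synthetic and real data; the heuristic, the 0.1 cutoff, and the median baseline are all assumptions, not an accepted privacy standard.

```python
# Rough re-identification screen: flags synthetic records that sit much
# closer to a real record than real records sit to each other. The heuristic,
# the 0.1 cutoff, and the median baseline are assumptions; treat this as a
# complement to formal privacy evaluation, not a substitute.
import numpy as np
from scipy.spatial import cKDTree

def near_copy_fraction(real: np.ndarray, synthetic: np.ndarray) -> float:
    tree = cKDTree(real)
    syn_to_real, _ = tree.query(synthetic, k=1)   # nearest real record
    real_to_real, _ = tree.query(real, k=2)       # col 1: nearest *other* real
    baseline = np.median(real_to_real[:, 1])      # typical real-data spacing
    return float(np.mean(syn_to_real < 0.1 * baseline))

rng = np.random.default_rng(0)
real = rng.normal(size=(500, 8))       # stand-in for scaled patient features
synthetic = rng.normal(size=(500, 8))  # stand-in for generated records
print(f"near-copy fraction: {near_copy_fraction(real, synthetic):.3f}")
```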
Criminal exposure: what could put people at risk
New offenses cover distribution of deepfakes (1-5 years imprisonment) and unlawful AI-based extraction of copyrighted online content. In healthcare, think falsified medical images or unauthorized use of literature or imaging datasets.
- Inventory all training data; confirm licenses or lawful bases; retain evidence of permissions (an illustrative record schema follows this list).
- Set red lines for staff: no scraping restricted sources, no synthetic doctor/patient voices or images in communications.
- Update vendor and data-source contracts with warranties and indemnities.
- Brief traveling executives and onsite teams in Italy on personal liability exposure.
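As referenced above, an inventory entry can be as simple as a structured record per dataset. The fields below are assumptions illustrating the provenance and permission evidence worth retaining; every name, URL, and path is hypothetical.

```python
# Illustrative inventory entry per training dataset; every field, name, and
# path here is hypothetical, showing the kind of evidence worth retaining.
from dataclasses import dataclass, field

@dataclass
class DatasetRecord:
    name: str
    source_url: str
    license: str                  # e.g., "CC-BY-4.0" or a contract reference
    lawful_basis: str             # GDPR basis covering any personal data
    permission_evidence: str      # path to the stored license or contract
    contains_personal_data: bool
    used_in_models: list = field(default_factory=list)

record = DatasetRecord(
    name="chest-xray-corpus",                           # hypothetical dataset
    source_url="https://example.org/data",              # placeholder URL
    license="institutional data-sharing agreement",
    lawful_basis="Art. 9(2)(g) relevant public interest",
    permission_evidence="contracts/2025/xray-dsa.pdf",  # hypothetical path
    contains_personal_data=True,
    used_in_models=["triage-assist v1.4"],
)
```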
Currently, these offenses apply to natural persons, meaning individual directors, officers, and employees can face criminal liability in Italy. Build procedures that protect people, not just entities.
90-day compliance playbook
- Inventory: List all AI systems used in or serving Italy; note intended purpose, MDR class, data flows, and clinical context.
- Gap analysis: Check patient notification, human oversight, and fairness testing against Italian law and the EU AI Act.
- Implement now: Ship patient-facing notices, clinician guidance, and an audit trail; enable easy override.
- Fairness program: Lock in bias metrics, subgroup validation, and ongoing monitoring; document everything.
- Certification path: For AI medical device software, align your MDR technical file with AI Act requirements; engage a notified body early.
- Research track: Identify Italian public/non-profit partners; draft DPA notification packages; define synthetic data workflows.
- Data and IP hygiene: Clean your training-data stack; fix licensing gaps; update contracts and internal policies.
- Watch the regulators: Track decrees from the Ministry of Health, AGENAS guidelines, and DPA interpretations through 2026.
What to watch through 2026
- Implementing decrees and sector guidance from Italian authorities by October 2026.
- Notified body readiness and expectations for AI fairness evidence within MDR assessments after August 2026.
- AGENAS anonymization and synthetic data guidelines and how they affect research utility vs. privacy risk.
Helpful resources
- EU AI Act (Regulation (EU) 2024/1689)
- EU Medical Devices Regulation (MDR) 2017/745
- AI for Healthcare for practical guidance on clinical AI, data governance, and bias testing.
- AI for Regulatory Affairs Specialists for teams preparing MDR and EU AI Act documentation and audits.
Bottom line
Italy's AI Law makes transparency, fairness, and human oversight non-negotiable in healthcare. Move early on patient notices, bias testing, and clinical governance, and you'll reduce risk now while setting yourself up for EU-wide compliance later.
References
[1] Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence.
[2] Italy's AI Law, Article 7(3).
[3] Id., Articles 7(2) and (6).
[4] Id., Article 8(3).
[5] Id., Article 8(3).