
European Commission Consultation Shapes High-Risk AI Classification for Life Sciences in the EU
Published on 19th June 2025
The European Commission has opened a consultation running from 6 June to 18 July 2025 to clarify how AI systems in life sciences will be classified as “high risk” under the EU AI Act. This initiative aims to gather input from stakeholders to inform the Commission’s guidelines, expected by February 2026. These guidelines will be essential for pharmaceutical, medical technology, and digital health organisations preparing to meet the AI Act’s high-risk requirements ahead of the August 2026 deadline.
The AI Act’s High-Risk Classification Explained
The AI Act, in force since 1 August 2024, sets a common legal framework for AI across the EU, promoting trustworthy, human-centric AI while protecting health, safety, and fundamental rights. It takes a risk-based approach: certain practices are prohibited outright, while the most extensive compliance obligations fall on high-risk AI systems.
High-risk AI includes systems that serve as safety components of products regulated under the EU legislation listed in Annex I, such as the Medical Devices Regulation (MDR) and the In Vitro Diagnostic Medical Devices Regulation (IVDR). It also covers standalone AI systems used in high-risk areas like healthcare, set out in Annex III.
The upcoming guidelines will clarify the practical application of these classifications, including a detailed list of what counts as high-risk. This is particularly important in life sciences, where the distinction between AI embedded in medical devices and standalone AI tools (like clinical decision support software) affects the applicable regulatory requirements and timelines.
High-Risk AI in Healthcare and Life Sciences
Healthcare is a key focus area under the AI Act, with medical devices and in vitro diagnostics clearly listed as high-risk. AI’s role in diagnostics, treatment planning, patient monitoring, and research is acknowledged, along with the risks posed by malfunction or misuse.
For example, an AI diagnostic tool integrated into an MRI scanner is likely to be classified as high-risk because it forms part of a regulated medical device. Standalone AI applications that support clinical decisions or manage patient data could also be high-risk, depending on their function and impact.
Narrow exemptions apply where an AI system only performs preparatory tasks or does not materially influence decisions, but relying on an exemption requires a documented assessment and registration.
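As a rough aid to reasoning, the two classification routes and the narrow exemption can be pictured as a simple decision flow. The Python sketch below is an illustrative simplification, not legal logic: the class, field, and function names are hypothetical, and a real assessment requires case-by-case legal analysis of the specific system.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    """Hypothetical, simplified profile of an AI system; not terms from the Act."""
    safety_component_of_annex_i_product: bool  # e.g. embedded in an MDR/IVDR device
    annex_iii_use_case: bool                   # e.g. standalone clinical decision support
    preparatory_task_only: bool                # candidate for the narrow exemption
    materially_influences_decisions: bool

def classify(system: AISystem) -> str:
    # Route 1 (Article 6(1)): safety components of products covered by the
    # EU legislation listed in Annex I (MDR, IVDR, ...) are high-risk.
    if system.safety_component_of_annex_i_product:
        return "high-risk (Article 6(1) / Annex I)"
    # Route 2 (Article 6(2)): standalone systems in Annex III areas are
    # high-risk unless a narrow exemption applies; claiming the exemption
    # still requires a documented assessment and registration.
    if system.annex_iii_use_case:
        if system.preparatory_task_only and not system.materially_influences_decisions:
            return "exempt, subject to documented assessment and registration"
        return "high-risk (Article 6(2) / Annex III)"
    return "not high-risk under these provisions"

# The article's MRI example: an AI diagnostic module embedded in a scanner.
mri_module = AISystem(True, False, False, True)
print(classify(mri_module))  # -> high-risk (Article 6(1) / Annex I)
```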
The consultation seeks concrete examples from stakeholders to clarify the boundaries, especially for AI tools used in clinical trials, patient management, or lab automation. Correct classification is critical to avoid unnecessary regulatory burdens or gaps.
Key Focus Areas in the Consultation
- Classification rules under Article 6(1) and Annex I for AI systems that are safety components
- Classification under Article 6(2) and Annex III for standalone high-risk AI
- General questions on classification criteria
- Requirements and obligations for providers and other actors in the AI value chain
- Potential updates to the list of high-risk use cases and prohibited AI practices
Particular attention falls on the definition of “safety component” and on whether AI used for monitoring, prevention, or harm mitigation should be included. This matters for IVD manufacturers that use AI for equipment performance monitoring or failure prediction.
The consultation also explores how AI-specific requirements will interact with existing EU legislation like the MDR and IVDR, which are under targeted review.
The consultation also addresses the responsibilities of providers and deployers, particularly around “substantial modification” of AI systems after they are placed on the market. This question is crucial for digital health companies, whose software and AI models evolve continuously.
Wider Implications
The consultation is a key step for the Commission to deliver detailed, practical guidance on high-risk AI classification. These guidelines will influence how authorities and notified bodies enforce the AI Act across sectors and Member States.
The AI Act’s risk-based framework aims to be proportionate and flexible, but effectiveness depends on clear definitions and consistent application. The Commission will also review and update the lists of high-risk use cases and prohibited AI practices annually to keep pace with technological and societal changes.
What Healthcare Organisations Should Do Next
This consultation gives life sciences organisations a chance to shape important regulatory guidance. Responding is optional, but engaging can help organisations anticipate changes and support a balanced compliance environment that fosters innovation.
Companies in biotech, medtech, and digital health should review their AI tools against the AI Act’s classification criteria and consider submitting practical examples or requests for clarification, especially if the risk status is unclear or if existing regulations already cover safety concerns.
Monitoring this consultation’s progress and preparing for the Commission’s guidelines publication in early 2026 will be critical for meeting the August 2026 compliance deadline.