ACHILLES Project: Simplifying EU AI Act Compliance for Greener, Trustworthy AI
The EU AI Act is being phased in steadily. Initial bans and transparency duties came into effect in early and mid-2025, with full obligations for high-risk AI systems scheduled for August 2026. ACHILLES, a €9 million Horizon Europe project launched in November 2024, helps organisations bridge the compliance gap without compromising AI performance or sustainability.
The project’s approach centers on human-centric, energy-efficient machine learning (ML) guided by a clear legal and ethical framework. After eight months, ACHILLES has moved from concept to delivery. Its first legal and ethical milestones are two major deliverables, D4.1 Legal & Ethical Mapping and D4.4 Ethics Guidelines, which distill hundreds of pages of legislation into practical recommendations. Following a legal workshop and technical requirements gathering, the four pilot use cases are refining their problem statements and evaluation frameworks. The ACHILLES Integrated Development Environment (IDE) is also taking shape, promising transparent, compliant AI development with embedded documentation to facilitate auditing.
Real-World Use Cases
ACHILLES validates its approach through four pilots in healthcare, security, creativity, and quality management.
- Montaser Awal, Director of Research at IDnow, highlights ACHILLES’ role in building privacy-preserving, compliant AI for identity verification with less dependence on real data, improving model quality and robustness.
- Marco Cuomo from Cuomo IT Consulting notes that ACHILLES’ tools and frameworks accelerate pharma AI projects by letting domain experts focus on their specialty while maintaining compliance.
- Nuno Gonçalves of the University of Coimbra’s Institute of Systems and Robotics emphasizes the project’s facilitation of collaboration between research institutions and industry, improving ML models while respecting privacy and security.
From Rules to Practical Implementation
Early on, ACHILLES established a rigorous legal framework. The D4.1 Legal & Ethical Mapping deliverable aligns EU and international laws—including the AI Act, GDPR, Data Act, Medical Devices Regulation, and cybersecurity legislation—with the IDE and pilot projects. Its companion, D4.4 Ethics Guidelines, translates these mappings into actionable checklists, consent templates, and bias-audit scripts.
The legal analysis covers fundamental rights, AI regulation, data protection, and sector-specific rules. It also addresses ethical challenges such as informed consent, facial recognition accuracy, algorithmic bias, hallucinations in generative AI, and overall trustworthiness. This foundation gives project partners a clear checklist of applicable requirements that will evolve as the project develops.
In June 2025, KU Leuven’s CiTiP team ran an internal workshop where each pilot completed a detailed legal questionnaire. This session helped clarify the applicability of various legal frameworks and informed updates to use case definitions. Future workshops, including public ones, are planned to continue this dialogue.
While risk classifications may shift as the pilots mature, the ACHILLES IDE itself is considered a limited-risk AI system under the AI Act, which means users must be informed that they are interacting with an AI system rather than a human. When personal data is handled, especially sensitive biometric or health information, GDPR rules apply, often requiring explicit consent or reliance on the scientific-research exemption, and possibly a Data Protection Impact Assessment (DPIA). The EU Cyber Resilience Act’s cybersecurity requirements may also be relevant, particularly for healthcare use cases.
Ethical concerns like algorithmic bias, hallucinations, and automation bias remain central and will be integrated into ongoing project developments.
An Iterative Compliance Loop
ACHILLES adopts a four-phase compliance cycle that runs throughout AI development:
- Map: Identify and analyze applicable legal, ethical, and regulatory norms.
- Design and Build: Develop trustworthy, transparent, and compliant AI systems using the ACHILLES IDE.
- Test: Pilot AI solutions in real-world settings, measuring performance and compliance via KPIs.
- Refine: Update legal and ethical mappings and tools based on pilot feedback, restarting the cycle.
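For illustration only, the loop can be pictured as a simple program. The minimal Python sketch below uses the four phase names from the list above; everything else (function names, iteration count) is hypothetical and does not correspond to any ACHILLES tooling.

```python
# A minimal, purely illustrative sketch of the four-phase compliance loop;
# the phase names come from the project description, everything else is hypothetical.
from enum import Enum

class Phase(Enum):
    MAP = "identify applicable legal, ethical, and regulatory norms"
    DESIGN_AND_BUILD = "develop the system in the ACHILLES IDE"
    TEST = "pilot in real-world settings and measure KPIs"
    REFINE = "update mappings and tools from pilot feedback"

def run_cycle(iterations: int) -> None:
    """Walk the four phases in order; each Refine feeds the next Map."""
    for i in range(1, iterations + 1):
        for phase in Phase:
            print(f"cycle {i}: {phase.name} -> {phase.value}")

run_cycle(iterations=2)
```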
Compliance Made Easy: The ACHILLES IDE
Building AI that complies with EU regulations currently involves juggling complex documents and spreadsheets. The ACHILLES IDE streamlines this by integrating specifications, code, documentation, and evidence into one workspace. It follows a specification-first approach: start with business, legal, and ethical requirements, then scaffold code and proofs accordingly.
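As a rough illustration of what "specification-first" can mean in practice, the sketch below captures requirements as structured records and derives placeholder tasks for code and evidence from them. It is plain Python with hypothetical field names, not the IDE's actual specification format.

```python
# Illustrative only: a simplified spec record in the spirit of a specification-first
# workflow; field names and article references are examples, not the IDE's format.
from dataclasses import dataclass, field

@dataclass
class Requirement:
    source: str        # e.g. a legal provision or a business/ethical need
    statement: str     # what the system must do or demonstrate
    evidence: list[str] = field(default_factory=list)  # artifacts proving compliance

spec = [
    Requirement("AI Act transparency duty", "Inform users they are interacting with an AI system"),
    Requirement("GDPR Art. 35", "Complete a DPIA before processing health data"),
]

def scaffold(requirements: list[Requirement]) -> list[str]:
    """Turn each requirement into a placeholder task for code and evidence."""
    return [f"TODO: implement and document '{r.statement}' ({r.source})" for r in requirements]

for task in scaffold(spec):
    print(task)
```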
The IDE includes a powerful Copilot that offers recommendations throughout the project lifecycle—from compliance checklists to specific technical tools like bias auditing for medical images. It uses an innovative Standard Operating Procedures language (SOP Lang) to create flexible, controllable workflows where humans and AI agents collaborate. Every decision is logged for transparency.
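SOP Lang's actual syntax is not reproduced here, but the underlying idea of a logged, human-in-the-loop workflow step can be sketched in plain Python. Every name, threshold, and log field below is hypothetical and serves only to show how a decision trail might be recorded.

```python
# A toy analogue of a human-in-the-loop workflow step with decision logging.
# This is not SOP Lang; it only illustrates logging every decision for transparency.
import json
import time

AUDIT_LOG = []

def log_decision(step: str, actor: str, decision: str, rationale: str) -> None:
    """Append one decision record to the audit trail."""
    AUDIT_LOG.append({
        "timestamp": time.time(),
        "step": step,
        "actor": actor,        # "ai_agent" or "human"
        "decision": decision,
        "rationale": rationale,
    })

def bias_audit_step(model_report: dict) -> str:
    """An AI agent proposes an outcome; a human reviewer confirms or overrides it."""
    proposal = "pass" if model_report.get("max_group_gap", 1.0) < 0.05 else "flag"
    log_decision("bias_audit", "ai_agent", proposal, "group-gap threshold check")
    human_decision = proposal  # in practice, collected from a reviewer in the IDE
    log_decision("bias_audit", "human", human_decision, "reviewer confirmation")
    return human_decision

bias_audit_step({"max_group_gap": 0.08})
print(json.dumps(AUDIT_LOG, indent=2))
```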
Developers can monitor training, generate transparent documentation such as Model Cards, and track system performance post-deployment to detect drift or degradation that may require retraining. The IDE also facilitates exporting compliance evidence and audit trails, connecting decision-makers, developers, users, and regulatory bodies efficiently.
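As one example of what post-deployment drift monitoring can look like, the sketch below applies a standard two-sample Kolmogorov-Smirnov test to a single input feature. This is a generic statistical check, not the IDE's own monitoring tooling, and the significance threshold is illustrative.

```python
# A minimal drift-monitoring sketch: compare a feature's live distribution against
# the training distribution with a two-sample KS test. Thresholds are illustrative.
import numpy as np
from scipy.stats import ks_2samp

def check_drift(training_feature: np.ndarray, live_feature: np.ndarray,
                alpha: float = 0.01) -> bool:
    """Return True if the live distribution differs significantly from training."""
    statistic, p_value = ks_2samp(training_feature, live_feature)
    return p_value < alpha

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 5_000)   # feature values seen during training
live = rng.normal(0.4, 1.0, 1_000)    # shifted values observed in production

if check_drift(train, live):
    print("Drift detected: flag the model for review and possible retraining.")
```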
Looking Ahead
As ACHILLES approaches its first-year mark, the pilots are finalizing their definitions and evaluation frameworks. Technical development is accelerating, with toolkits for bias detection, explainability, and robustness taking shape alongside the IDE’s ongoing implementation. Multidisciplinary workshops continue, some open to external participants, covering explainability, human oversight, bias mitigation, and compliance verification.
ACHILLES is also collaborating with other EU-funded projects focused on AI-driven data operations and compliance technologies to share tools, workshops, and dissemination efforts.
The project aims to demonstrate that, by the time the AI Act is fully enforceable, trustworthy and sustainable AI can be a competitive advantage—not a bureaucratic burden—for European innovators.
Disclaimer: This project has received funding from the European Union’s Horizon Europe research and innovation programme under Grant Agreement No 101189689.