Wodan AI raises €2M to push encrypted AI into production for Europe
Wodan AI closed a €2 million pre-seed round led by JME Ventures, Swanlaab, and Adara Ventures, with ScaleFund also participating. The company builds technology that lets machine learning, computer vision, and LLMs run on fully encrypted data: no decryption step, no plaintext exposure.
For teams in finance, defense, healthcare, and critical infrastructure, this means model inference on sensitive data while meeting strict EU privacy and sovereignty requirements. It's a direct path to reduce data exposure risk without pausing AI adoption.
Why this matters for engineering and security teams
- Privacy by default: Homomorphic encryption keeps data encrypted end-to-end. The provider, platform, and infrastructure don't see plaintext.
- Compliance alignment: Easier alignment with GDPR and sovereignty rules, and a cleaner audit story for cross-border processing. The EU's AI Act provides relevant context.
- Risk reduction: Reduces blast radius from insider threats, vendor breaches, or model-serving leaks.
- Broader workload coverage: Wodan AI is targeting encrypted ML, CV, and LLM workloads rather than niche demos.
What Wodan AI is building
The platform applies homomorphic encryption to enable AI operations directly on ciphertext. Practical takeaway: sensitive inputs never leave encrypted form, yet models can still compute useful outputs. That is the core promise of privacy-preserving AI.
Recent progress in HE schemes and implementation techniques has made this viable for more real-world use cases. For a primer on the core ideas behind homomorphic encryption, see the community standardization effort HomomorphicEncryption.org.
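The article doesn't say which scheme Wodan AI uses (production systems typically rely on lattice-based schemes such as CKKS or BFV). As a minimal sketch of the core idea, the toy Paillier cryptosystem below is additively homomorphic: multiplying two ciphertexts yields an encryption of the sum of the plaintexts, so the computing party never sees plaintext. The tiny primes are for illustration only and offer no security.

```python
import math
import random

def L(x, n):
    # Paillier's L function: L(x) = (x - 1) / n
    return (x - 1) // n

def keygen(p, q):
    # Toy key generation; real deployments use primes of 1024+ bits.
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1                      # standard simple choice of generator
    mu = pow(L(pow(g, lam, n * n), n), -1, n)
    return (n, g), (lam, mu)

def encrypt(pub, m):
    n, g = pub
    r = random.randrange(1, n)     # random blinding factor, coprime to n
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    return (L(pow(c, lam, n * n), n) * mu) % n

pub, priv = keygen(17, 19)
c1 = encrypt(pub, 20)
c2 = encrypt(pub, 22)
# Homomorphic addition: multiply ciphertexts, decrypt the sum.
c_sum = (c1 * c2) % (pub[0] ** 2)
assert decrypt(pub, priv, c_sum) == 42
```

Full encrypted ML needs both addition and multiplication on ciphertexts (fully homomorphic encryption), which is where the performance overhead discussed below comes from.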
Funding, focus, and go-to-market
CEO Bob Dubois said the round will help expand the technology and deepen the company's position in secure AI for Europe. The company plans to relocate its global HQ to Madrid and consolidate R&D there, concentrating on advanced cryptography, ML, and privacy technologies.
Wodan AI is also running a pilot with a Spanish financial institution, early proof that encrypted AI can fit critical-sector requirements. CTO and co-founder Manuel Pérez Yllan emphasized Spain's talent base and the goal to build a European hub for private, secure AI.
What to expect next
- Expanded capabilities: More advanced encrypted computer vision and LLM features, with deployment aimed at production environments.
- R&D hiring: Growth in cryptography, systems, model optimization, and MLOps to reduce overhead and improve latency.
- Commercial push: Contracts with European organizations in strategic sectors.
Practical notes for implementation teams
- Performance trade-offs: Expect overhead vs. plaintext inference. Start with narrow, high-value use cases (e.g., PII-heavy scoring, document processing, or CV on sensitive imagery) and measure end-to-end latency.
- Key management: Treat key custody and rotation as first-class. Align with HSM/KMS policies, access controls, and incident playbooks.
- MLOps integration: Plan for encrypted data pipelines, monitoring that preserves privacy, and careful handling of logs/metrics to avoid accidental plaintext leakage.
- Threat modeling: Update assumptions. If the platform never sees plaintext, vendor and infra risks shift: good for audits, but verify side channels and metadata exposure.
This funding is a signal: encrypted AI is moving from research into production for regulated workloads. If you're exploring private AI in Europe, this is a development to track, and a nudge to plan your privacy-by-design architecture now.