Eli Lilly Partners with Nvidia to Build AI Supercomputer for Faster Drug Discovery
Eli Lilly is building a high-performance AI supercomputer with Nvidia to speed up drug discovery and development. The goal is simple: shorten the path from idea to treatment while cutting waste in the research process.
Using this system, Lilly scientists plan to train AI models on millions of experimental datasets. That scale should help the company spot and test promising drug candidates faster and with more confidence.
What's being built
The new supercomputer will be owned and operated by Lilly and built on Nvidia's DGX SuperPOD with DGX B300 systems. In practice, that means high-throughput training and inference, tight networking, and the ability to iterate on large models without bottlenecks.
Lilly says several of its proprietary models will be accessible through Lilly TuneLab, a federated AI/ML platform. Biotech partners can tap into models trained on years of Lilly research without exposing their own data.
Federated access without sharing sensitive data
The TuneLab approach lets outside companies send tasks to Lilly's models while keeping their datasets private. No raw data needs to leave the source, which lowers compliance risk and keeps IP protected.
For IT and data teams, this looks like secure endpoints, audit trails, and policy-driven access. It's a practical way to collaborate across organizations where data sharing is a hard stop.
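The general pattern can be sketched in a few lines. This is a minimal, hypothetical federated-averaging loop, not TuneLab's actual protocol (which Lilly has not published): each partner trains on its own private data, and only model weights ever cross the organizational boundary.

```python
# Minimal federated-averaging sketch (hypothetical; TuneLab's real
# protocol and APIs are not public). Each partner trains locally and
# shares ONLY weight updates -- raw data never leaves the client.

def local_step(weights, data, lr=0.02):
    """One gradient step for a 1-D linear model y = w*x on local data."""
    w = weights
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_round(global_w, client_datasets):
    """Clients compute updates on private data; server averages them."""
    updates = [local_step(global_w, data) for data in client_datasets]
    return sum(updates) / len(updates)  # only weights cross the boundary

# Two partners holding private datasets, both drawn from y = 2x
partner_a = [(1.0, 2.0), (2.0, 4.0)]
partner_b = [(3.0, 6.0), (4.0, 8.0)]

w = 0.0
for _ in range(50):
    w = federated_round(w, [partner_a, partner_b])
print(round(w, 2))  # converges toward 2.0
```

Real deployments layer authentication, encryption, and secure aggregation on top of this skeleton, but the core property is the same: the coordinator never sees raw records.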
Beyond discovery: full pipeline impact
Lilly plans to apply the supercomputer across drug development, manufacturing, medical imaging, and enterprise AI operations. Think model-assisted trial design, process optimization, and imaging analysis under one compute roof.
"Lilly is shifting from using AI as a tool to embracing it as a scientific collaborator," said Thomas Fuchs, senior vice-president and chief AI officer. That shift puts engineering teams closer to the lab bench, and closer to business outcomes.
Why this matters for IT and development teams
- Data engineering at scale: Expect pipelines for millions of experiment records, versioned datasets, and reproducible training runs. Feature stores and lineage tracking aren't nice-to-haves here.
- Model delivery and governance: Federated access means strong API design, identity and access management, monitoring, and usage quotas. You'll need clear SLAs and rollback plans.
- Privacy-first collaboration: Privacy-preserving learning patterns (federation, secure enclaves, differential privacy where applicable) will become standard for partner-facing workloads.
- Compute-aware MLOps: Queueing, scheduling, and cost controls for large training jobs on GPU clusters are essential. Observability needs to include both model performance and cluster health.
- Domain-specific evaluation: Benchmarks must reflect chemical and biological outcomes, not generic metrics. Build evaluation suites tied to lab validation.
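To make the compute-aware MLOps point concrete, here is a toy admission-control sketch. The class name and cost model are made up for illustration; production clusters would delegate this to a scheduler like Slurm or Kubernetes, but the budgeting logic is the same idea.

```python
import heapq

class GpuJobQueue:
    """Toy priority queue with a cost budget: jobs are admitted in
    priority order until the estimated GPU-hour budget runs out.
    (Illustrative only; names and cost model are hypothetical.)"""

    def __init__(self, budget_gpu_hours):
        self.budget = budget_gpu_hours
        self._heap = []  # entries: (negated priority, name, est_gpu_hours)

    def submit(self, name, priority, est_gpu_hours):
        # heapq is a min-heap, so negate priority to pop highest first
        heapq.heappush(self._heap, (-priority, name, est_gpu_hours))

    def schedule(self):
        """Pop jobs by priority; defer any that would exceed the budget."""
        scheduled, deferred = [], []
        while self._heap:
            _, name, hours = heapq.heappop(self._heap)
            if hours <= self.budget:
                self.budget -= hours
                scheduled.append(name)
            else:
                deferred.append(name)
        return scheduled, deferred

q = GpuJobQueue(budget_gpu_hours=100)
q.submit("finetune-admet", priority=3, est_gpu_hours=80)
q.submit("ablation-sweep", priority=1, est_gpu_hours=50)
q.submit("eval-run", priority=2, est_gpu_hours=10)
print(q.schedule())  # (['finetune-admet', 'eval-run'], ['ablation-sweep'])
```

The observability bullet follows directly: a queue like this only works if job-level cost estimates and cluster-level utilization are both tracked and fed back into scheduling decisions.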
Industry context
Pharma is steadily integrating AI into discovery and safety testing. This momentum lines up with the U.S. FDA's push to enable alternatives to animal testing, which encourages computational methods and in vitro models.
For background, see Nvidia's overview of DGX SuperPOD for large-scale AI (Nvidia DGX SuperPOD) and the FDA's work on alternative methods (FDA Advancing Alternative Methods).
Analysts at Jefferies have projected AI-related R&D spending could reach $30-$40 billion by 2040, signaling deeper reliance on AI across healthcare.
What to watch next
- API accessibility: How broadly will TuneLab models be exposed to partners, and what integration patterns will be supported?
- Model lifecycle transparency: Will Lilly share validation protocols or performance ranges to help partners assess risk?
- Security posture: Details on encryption, isolation, and monitoring for federated workloads will matter for vendor risk reviews.
- Ecosystem effects: Expect more pharma-Nvidia tie-ups and standardized toolchains for scientific AI.
Practical next steps for tech teams
- Audit your data estate for scientific AI readiness: formats, quality, lineage, and governance.
- Prototype federated or privacy-preserving workflows if you collaborate with external labs or vendors.
- Stand up GPU-aware MLOps with clear cost controls and job scheduling policies.
- Co-develop evaluation metrics with domain scientists to keep models tied to real outcomes.
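As one example of a domain metric co-developed with scientists, virtual-screening teams often report enrichment factor rather than plain accuracy: it measures how much denser true actives are in the model's top-ranked fraction than in the screened set overall. The data below is made up for illustration.

```python
def enrichment_factor(scores, labels, top_frac=0.1):
    """Enrichment factor for virtual screening:
    EF = (hits_in_top / n_top) / (total_hits / n_total).
    `scores` are model rankings; `labels` are 1 for actives, 0 otherwise."""
    ranked = sorted(zip(scores, labels), key=lambda p: p[0], reverse=True)
    n_top = max(1, int(len(ranked) * top_frac))
    hits_top = sum(label for _, label in ranked[:n_top])
    total_hits = sum(labels)
    return (hits_top / n_top) / (total_hits / len(labels))

# Made-up example: 10 compounds, 2 actives; the model ranks one active first.
scores = [0.95, 0.90, 0.85, 0.70, 0.60, 0.55, 0.40, 0.30, 0.20, 0.10]
labels = [1,    0,    0,    1,    0,    0,    0,    0,    0,    0]
print(enrichment_factor(scores, labels, top_frac=0.1))  # 5.0
```

A generic metric like accuracy would score a do-nothing model well on this imbalanced data; the enrichment factor directly reflects the lab-relevant question of which compounds get synthesized and tested first.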