Eli Lilly and Nvidia Team Up on AI Supercomputer to Fast-Track Drug Discovery and Development

Eli Lilly is teaming up with Nvidia to build a DGX SuperPOD that speeds drug discovery and development. TuneLab lets partners use Lilly models without exposing their data.

Categorized in: AI News, IT and Development
Published on: Nov 01, 2025

Eli Lilly partners with Nvidia to build an AI supercomputer for faster drug discovery

Eli Lilly is teaming up with Nvidia to build a high-performance supercomputer aimed at compressing the time it takes to discover and develop new medicines. For IT and engineering teams, this signals a clear shift: pharma is moving core R&D into large-scale AI workflows backed by serious compute.

What's being built

The system is based on Nvidia's DGX SuperPOD with DGX B300 systems and will be owned and operated by Lilly. The goal is simple: train AI models on millions of experimental datasets to identify and test drug candidates more effectively, then carry those gains into development and manufacturing.

Several of Lilly's proprietary models will be available through Lilly TuneLab, a federated AI/ML platform that lets biotech firms tap models trained on years of Lilly data without sharing their own proprietary data. This setup preserves privacy while letting partners benefit from well-trained models.

Why this matters for engineers

Federated access, model reuse, and strict privacy constraints are becoming table stakes in regulated AI. Building and running systems like this touches every layer, from data ingestion to distributed training to validation and monitoring.

  • Data engineering: normalize assay, omics, and imaging datasets; enforce lineage and governance; keep feature stores consistent across teams.
  • MLOps: automate training and evaluation; track versions and datasets; enforce reproducibility and bias checks before promotion.
  • Infrastructure: schedule multi-node GPU jobs; optimize storage throughput; design networks for high IO; contain costs with quotas and right-sizing.
  • Security and compliance: protect PHI and IP; use secrets management, fine-grained access, and strong audit trails; align with GxP expectations.
  • Observability: full telemetry across training and inference; drift detection; cost and utilization dashboards for GPUs, storage, and networking.
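The observability bullet above can be made concrete with a drift check. Below is a minimal sketch using the population stability index (PSI) over score buckets; the data, bucket edges, and the 0.1 rule-of-thumb threshold are illustrative assumptions, not details of Lilly's stack.

```python
import math

def psi(expected, actual, edges):
    """Population Stability Index between two samples, given bucket edges.
    PSI below ~0.1 is commonly read as 'no significant drift' (rule of thumb)."""
    def fractions(sample):
        counts = [0] * (len(edges) - 1)
        for x in sample:
            for i in range(len(edges) - 1):
                if edges[i] <= x < edges[i + 1]:
                    counts[i] += 1
                    break
        n = len(sample)
        # small epsilon avoids log(0) for empty buckets
        return [max(c / n, 1e-6) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Illustrative: model scores at training time vs. in production
train_scores = [0.1, 0.2, 0.25, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
prod_scores = [0.3, 0.35, 0.4, 0.45, 0.5, 0.55, 0.6, 0.65, 0.85, 0.9]
edges = [0.0, 0.25, 0.5, 0.75, 1.0]

print(round(psi(train_scores, prod_scores, edges), 3))
```

In practice a check like this would run on a schedule against live inference logs and raise an alert when the index crosses the agreed threshold.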

Where Lilly plans to apply it

  • Discovery: generative design, property prediction, and active learning loops to prioritize experiments faster.
  • Development and manufacturing: process modeling, QA/QC, and yield optimization.
  • Medical imaging: model-assisted labeling and multi-modal diagnostics.
  • Enterprise AI: internal knowledge search, code assistants, and decision support.

As Thomas Fuchs, senior vice-president and chief AI officer, put it: "Lilly is shifting from using AI as a tool to embracing it as a scientific collaborator."

Architecture at a glance

Expect a high-performance GPU cluster architecture with distributed training, fast interconnects, and scalable storage to support large, multi-institutional datasets. Nvidia's DGX SuperPOD reference design sets the foundation for throughput and manageability at this scale.

If you want a deeper look at the reference stack, see Nvidia's overview of DGX SuperPOD.

Federated access without sharing data

TuneLab's federated approach lets external partners run or fine-tune models without exposing their raw data. For teams dealing with proprietary datasets and strict privacy rules, this approach reduces integration friction and legal overhead while keeping model performance high.
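To illustrate the general pattern, here is a minimal federated-averaging (FedAvg) sketch in plain Python, assuming a toy one-parameter linear model and hypothetical partner datasets. TuneLab's actual mechanism is not described in detail here, so treat this as a sketch of the idea that only model updates cross site boundaries, not Lilly's implementation.

```python
# Each site fits y = w * x on its private data and shares only its
# locally updated weight; the raw (x, y) pairs never leave the site.

def local_update(w, data, lr=0.01, steps=50):
    """One site's training loop; `data` never leaves this function."""
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def federated_round(w_global, sites):
    """Server averages locally trained weights (FedAvg, equal weighting)."""
    local_weights = [local_update(w_global, site) for site in sites]
    return sum(local_weights) / len(local_weights)

# Two hypothetical partners whose private data both follow roughly y = 3x
site_a = [(1.0, 3.1), (2.0, 5.9), (3.0, 9.2)]
site_b = [(1.5, 4.4), (2.5, 7.6)]

w = 0.0
for _ in range(10):
    w = federated_round(w, [site_a, site_b])
print(round(w, 2))  # converges near 3.0
```

Real systems add secure aggregation, differential privacy, or gradient clipping on top of this loop, but the core privacy property is the same: the server sees parameters, not patient or assay records.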

Industry context

Pharma is integrating AI into discovery and safety testing, and this fits with the U.S. FDA's push to reduce animal testing by supporting new approach methodologies (NAMs). For reference, see the FDA's work on Advancing NAMs.

Analysts at Jefferies have projected that AI-related R&D spending could reach $30-$40 billion by 2040, highlighting how fast this approach is becoming standard across healthcare.

What builders can do next

  • Pilot federated patterns: design APIs for model access that keep proprietary data local while sharing model outputs or gradients where appropriate.
  • Invest in data foundations: build a strong ontology, metadata, and lineage strategy; standardize schemas early to avoid rework.
  • Codify validation: define statistical acceptance criteria, bias checks, and safety tests that gate deployment in regulated settings.
  • Plan GPU efficiency: adopt mixed precision, distributed strategies, and job preemption to keep utilization high and spend in check.
  • Upskill teams: ensure engineers and scientists work from a common workflow, with shared tooling, shared dashboards, and shared standards.

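The "codify validation" step above might look like the following sketch. The gate names, metrics, and thresholds are illustrative assumptions, not any regulatory standard: a candidate model is promoted only if every acceptance check passes.

```python
def gate_accuracy(metrics, minimum=0.85):
    """Statistical acceptance criterion: overall accuracy floor."""
    return metrics["accuracy"] >= minimum

def gate_subgroup_gap(metrics, max_gap=0.05):
    """Bias check: accuracy across subgroups must stay within max_gap."""
    scores = metrics["subgroup_accuracy"].values()
    return max(scores) - min(scores) <= max_gap

def gate_calibration(metrics, max_ece=0.03):
    """Expected calibration error must stay below a fixed ceiling."""
    return metrics["ece"] <= max_ece

GATES = [gate_accuracy, gate_subgroup_gap, gate_calibration]

def promote(metrics):
    """Return (decision, names of failed gates); all gates must pass."""
    failed = [g.__name__ for g in GATES if not g(metrics)]
    return (len(failed) == 0, failed)

# Hypothetical candidate model evaluated on a held-out set
candidate = {
    "accuracy": 0.91,
    "subgroup_accuracy": {"site_1": 0.92, "site_2": 0.88},
    "ece": 0.021,
}
ok, failed = promote(candidate)
print(ok, failed)
```

Wiring a gate function like this into the CI/CD pipeline makes the acceptance criteria auditable, which matters in GxP-regulated settings where a deployment decision must be traceable to documented evidence.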

Bottom line: Lilly's move brings serious compute, privacy-aware access, and end-to-end workflows under one roof. For IT and development teams, the opportunity is clear: build for scalability, safety, and collaboration from day one.

