Science Context Protocol: a common layer for AI agents across labs
Most AI systems built for research still live in silos. A-Lab, ChemCrow, Coscientist: all useful, but locked into local workflows that don't extend across institutions.
The Science Context Protocol (SCP), from Shanghai Artificial Intelligence Laboratory, proposes a shared protocol layer so AI agents, researchers, and lab equipment can work together securely and traceably. It builds on Anthropic's Model Context Protocol (MCP), now widely adopted for connecting AI models to external tools and data.
AI-powered research, without the walls
SCP targets two big gaps: consistent access to scientific resources and end-to-end experiment orchestration. The protocol standardizes how tools, models, databases, and physical instruments are described and accessed, so agents can discover and compose capabilities across institutions.
It also manages the full experiment lifecycle (registration, planning, execution, monitoring, archiving) with fine-grained authorization and a complete audit trail that spans code and wet-lab steps.
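The paper does not publish a concrete schema for these resource descriptions, but as a rough illustration, a standardized tool descriptor that agents could discover and validate might look like the following sketch (all field names are hypothetical, not taken from the SCP spec):

```python
# Hypothetical SCP-style resource descriptor; field names are illustrative.
descriptor = {
    "id": "docking/autodock-vina",
    "kind": "computational_tool",
    "version": "1.2.5",
    "inputs": {"receptor": "pdbqt", "ligand": "pdbqt"},
    "outputs": {"poses": "pdbqt", "affinity_kcal_mol": "float"},
    "provenance": {"maintainer": "example-lab", "container": "vina:1.2.5"},
    "access": {"auth": "token", "scopes": ["read", "execute"]},
}

# Minimum metadata an agent would need to discover and invoke the resource.
REQUIRED = {"id", "kind", "version", "inputs", "outputs", "access"}

def validate(desc: dict) -> bool:
    """Return True if the descriptor carries all required fields."""
    return REQUIRED.issubset(desc)

print(validate(descriptor))  # True
```

The point of a uniform descriptor is that discovery and composition become generic: an agent can filter a registry by `kind` and match `outputs` of one tool to `inputs` of the next without tool-specific glue code.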
What SCP adds beyond MCP
- Richer experiment metadata: full protocol structure, parameters, environments, versions, and provenance, so plans are reproducible and comparable.
- Central hub (not peer-to-peer): a global registry and "brain" that knows about tools, datasets, agents, and instruments, and routes work to the right servers.
- Experiment flow API: orchestration that decomposes goals, parallelizes runs, tracks dependencies, and applies fallbacks when anomalies show up.
- Standardized device drivers: a uniform way to integrate lab robots and instruments into autonomous workflows.
How the architecture works
Clients (humans and agents) submit a research goal to the SCP hub. The hub uses AI models to analyze the goal, break it into tasks, and propose several executable plans with rationales: dependencies, duration, risk, and cost.
Selected workflows are stored as structured JSON "contracts" that every participant follows. During execution, the hub monitors progress, validates outputs, triggers warnings, and can switch to fallback strategies. This is especially useful for multi-stage work that mixes simulation with physical experiments.
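The contract format itself is not published; as a minimal sketch under assumed field names, such a JSON contract could encode tasks, dependencies, and fallbacks, letting the hub compute which tasks are runnable in parallel at any moment:

```python
import json

# Hypothetical workflow "contract" an SCP hub might store; the schema is
# illustrative, not the published SCP format.
contract = {
    "goal": "screen 50 molecules against target protein",
    "tasks": [
        {"id": "t1", "tool": "db/compound-query", "depends_on": []},
        {"id": "t2", "tool": "compute/admet-score", "depends_on": ["t1"]},
        {"id": "t3", "tool": "compute/docking", "depends_on": ["t2"],
         "fallback": "compute/docking-alt"},  # used if t3 fails validation
    ],
}

def ready_tasks(contract: dict, done: set) -> list:
    """Tasks whose dependencies are all finished, i.e. runnable now."""
    return [t["id"] for t in contract["tasks"]
            if t["id"] not in done and all(d in done for d in t["depends_on"])]

serialized = json.dumps(contract)        # the contract travels as plain JSON
print(ready_tasks(contract, set()))      # ['t1']
print(ready_tasks(contract, {"t1"}))     # ['t2']
```

Because every participant reads the same structured contract, monitoring and fallback switching reduce to bookkeeping over task state rather than ad-hoc coordination.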
What's already live
The team built an Internal Discovery Platform on SCP with 1,600+ interoperable tools. By domain: biology 45.9%, physics 21.1%, chemistry 11.6%, with the rest split across mechanics, materials science, math, and computer science.
By function: computational tools 39.1%, databases 33.8%, model services 13.3%, lab operations 7.7%, literature search 6.1%. Examples range from protein structure prediction and docking to automated pipetting instructions for lab robots.
Use cases: protocol extraction to drug screening
- Protocol extraction and execution: a scientist uploads a PDF. The system parses the steps, converts them to a machine-readable format, and runs the experiment on a robotic platform with validation and error handling.
- AI-controlled drug screening: starting with 50 molecules, the workflow scores drug-likeness and toxicity, filters by criteria, prepares a protein structure for docking, and flags two promising candidates, coordinated across database, computation, and structure-analysis servers.
These scenarios are promising, but actual performance will depend on adoption, device compatibility, and data quality in live labs.
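As a toy illustration of the filtering step in such a screen, a drug-likeness check in the spirit of Lipinski's rule of five might look like this (the molecule data and property values are made up for illustration, not from the paper):

```python
# Toy drug-likeness filter in the spirit of Lipinski's rule of five.
# Molecule properties below are illustrative, not real screening data.
molecules = [
    {"name": "cand-A", "mw": 342.4, "logp": 2.1, "h_donors": 2, "h_acceptors": 5},
    {"name": "cand-B", "mw": 612.8, "logp": 6.3, "h_donors": 5, "h_acceptors": 11},
    {"name": "cand-C", "mw": 298.3, "logp": 3.8, "h_donors": 1, "h_acceptors": 4},
]

def drug_like(m: dict) -> bool:
    """Rule of five: MW <= 500, logP <= 5, <= 5 H-bond donors,
    <= 10 H-bond acceptors."""
    return (m["mw"] <= 500 and m["logp"] <= 5
            and m["h_donors"] <= 5 and m["h_acceptors"] <= 10)

hits = [m["name"] for m in molecules if drug_like(m)]
print(hits)  # ['cand-A', 'cand-C']
```

In an SCP workflow, a step like this would run as one node of the contract, with the property values supplied by upstream database and computation servers.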
Why this matters for your lab
- Interoperability across institutions: share tools and protocols without rewriting glue code for each collaboration.
- Reproducibility by default: every step is defined, versioned, and traceable across computational and physical stages.
- Throughput and parallelism: high-volume experiments and multi-agent coordination become first-class features.
- Governance: fine-grained permissions, compliance hooks, and auditable trails help meet institutional and regulatory needs.
How to prepare (practical steps)
- Inventory your assets: tools, datasets, models, instruments; document interfaces and access policies.
- Standardize metadata: parameters, versions, environment specs, provenance. Treat protocols as data.
- Containerize compute tools and define resource limits; mirror public datasets locally when needed.
- Pilot safely: start read-only integrations and dry-run workflows; add write and device control after review.
- Align with LIMS/ELN: map identifiers, samples, and audit trails; avoid parallel record-keeping.
- Set policy early: permissions for agents, human-in-the-loop checkpoints, risk thresholds, and kill-switches for devices.
- QA and validation: golden datasets, synthetic controls, and calibration checks for lab hardware.
- Train your team on agent orchestration and failure modes; don't skip postmortems.
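"Treat protocols as data" can start small: store each protocol as a versioned, structured record and derive a content hash for provenance. The record below is a sketch with hypothetical fields, not an SCP schema:

```python
import hashlib
import json

# Hypothetical versioned protocol record ("protocols as data");
# fields are illustrative, not an SCP schema.
protocol = {
    "name": "pcr-amplification",
    "version": "2.1.0",
    "parameters": {"cycles": 35, "anneal_temp_c": 58.0},
    "environment": {"thermocycler_model": "example-TC100", "firmware": "4.2"},
    "steps": ["denature", "anneal", "extend"],
}

def content_hash(record: dict) -> str:
    """Stable hash over the canonical JSON form, usable as a provenance ID:
    identical protocols hash identically, any edit changes the hash."""
    canonical = json.dumps(record, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:12]

print(content_hash(protocol))
```

Records like this slot directly into LIMS/ELN mapping and audit trails: the hash gives every run an unambiguous reference to the exact protocol version it executed.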
Open questions
- Standard adoption: will vendors and institutions ship compatible drivers and endpoints?
- Data governance: cross-border data flow, IP, and patient/sample privacy controls.
- Safety: fail-safes for physical instruments and robust anomaly detection in closed-loop experiments.
- Interoperability: how SCP will coexist with existing lab and HPC schedulers without creating duplication.
Learn more
- MCP background from Anthropic: Model Context Protocol overview
- SCP spec and reference implementation are available as open source on GitHub (per the authors).