Dasheng takes on real-world science as China's system-level AI partner

Dasheng is a system-level lab agent from the Shanghai Academy of AI for Science and Fudan University, uniting multimodal reasoning, long-term memory, and self-driving labs, with a focus on safer, faster experimental loops and clear logs.

Published on: Mar 04, 2026

Meet "Dasheng": a system-level AI agent for real lab work

On March 1, the Shanghai Academy of AI for Science and Fudan University unveiled an upgraded "NovaInspire: Scientist-Centered AI Open Platform" and introduced its core module: Dasheng.

Dasheng, nicknamed after the Monkey King, is a system-level scientific AI agent that brings together multimodal foundation models, long-term multi-threaded collective memory, expert-level scientific "skills," self-driving laboratories, and a secure, trustworthy framework.

What sets Dasheng apart

  • Multimodal reasoning: Works across text, data tables, code, spectra, and images. Useful for unifying papers, ELN notes, figures, and raw instrument output in one loop.
  • Long-horizon memory: Keeps threaded context across projects, experiments, and collaborators. Ideal for multi-week campaigns and iterative protocols.
  • Expert "skills": Encodes domain tasks such as literature triage, hypothesis drafting, method adaptation, parameter suggestion, code generation, and result critique.
  • Self-driving labs: Orchestrates instruments, runs closed-loop experiments, and updates models with measured results. See a foundational overview of self-driving laboratories in Nature (2020).
  • Secure-by-design: Emphasizes safety, auditability, and trust. Clear logs, permissions, and isolation to protect people, IP, and data.

Why this matters for your workflow

  • From ideas to instruments: Turn hypotheses into executable plans, schedule runs, collect data, and refine settings, all without handoffs breaking context.
  • Tighter feedback loops: Use Bayesian or heuristic search to explore parameter spaces and converge faster on useful results.
  • Less glue work: Reduce time spent stitching together ELNs, LIMS, scripts, and spreadsheets. Keep provenance traceable end-to-end.
  • Institutional memory: What was learned last quarter persists (methods, edge cases, calibrations), so teams don't repeat dead ends.
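The "tighter feedback loops" idea can be sketched in a few lines. This is a minimal, hypothetical example, not Dasheng's actual planner: a heuristic propose-measure-update loop over two reaction parameters, with a mock `measured_yield` function standing in for a real instrument reading.

```python
import random

def measured_yield(temp_c, conc_m):
    """Mock instrument response: peaks near 72 deg C and 0.45 M (illustrative only)."""
    return 100 - 0.05 * (temp_c - 72) ** 2 - 400 * (conc_m - 0.45) ** 2

def closed_loop_search(n_iters=50, seed=0):
    """Heuristic closed loop: propose a candidate, 'measure' it, keep the best."""
    rng = random.Random(seed)
    best = {"temp_c": 25.0, "conc_m": 0.10}   # starting conditions
    best_y = measured_yield(**best)
    for _ in range(n_iters):
        # Propose a local perturbation around the current best point.
        cand = {
            "temp_c": best["temp_c"] + rng.gauss(0, 5),
            "conc_m": max(0.01, best["conc_m"] + rng.gauss(0, 0.05)),
        }
        y = measured_yield(**cand)
        if y > best_y:                        # accept only improvements
            best, best_y = cand, y
    return best, best_y
```

A real system would swap the mock objective for an instrument call and the random perturbation for a Bayesian optimizer, but the loop structure (propose, run, measure, update) is the same.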

Getting your lab ready

  • Map the stack: Instruments, drivers/APIs, data formats, ELN/LIMS, storage, and compute. Close gaps that block automation.
  • Standardize data: Use schemas, units, and metadata that support reuse and reanalysis. The FAIR principles are a practical baseline.
  • Define guardrails: Permissions, sandbox runs, kill switches, and safety interlocks. Require human sign-off for risky steps.
  • Start small: Pilot on a stable, well-instrumented task (e.g., reaction temperature tuning, solvent screening, or calibration routines) before scaling up.
  • Measure impact: Track cycle time, yield/accuracy improvements, reagent use, and error rates. Keep a clear before/after.
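Standardizing data is the step most labs underestimate. As a hedged sketch (the field names and types here are invented for illustration), even a tiny schema check at ingest time catches the records that would otherwise break reuse and reanalysis later:

```python
# Hypothetical required schema for one experimental record: field -> expected type.
REQUIRED_FIELDS = {"sample_id": str, "temp_c": float, "conc_m": float, "operator": str}

def validate_record(record):
    """Return a list of problems; an empty list means the record passes."""
    problems = []
    for field, ftype in REQUIRED_FIELDS.items():
        if field not in record:
            problems.append(f"missing field: {field}")
        elif not isinstance(record[field], ftype):
            problems.append(f"{field}: expected {ftype.__name__}")
    return problems
```

In practice you would express this as a shared JSON Schema or similar artifact so instruments, ELN, and agents validate against the same definition.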

Risks to plan for

  • Hallucinations and overconfidence: Require citations, simulation checks, and sanity bounds on proposed parameters.
  • Instrument drift and brittleness: Schedule reference runs, validation plates, and periodic recalibration.
  • Safety and compliance: Enforce SOPs, chemical/biological safety limits, and audit logs that satisfy internal and external reviews.
  • Data leakage/IP exposure: Keep data local where needed, use fine-grained access control, and redact sensitive fields.
  • Objective misspecification: Optimize for the right metric-include constraints for cost, safety, and generalizability.
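"Sanity bounds on proposed parameters" is concrete enough to sketch. The limits below are made-up placeholders, not real safety values; the point is that a hard, code-enforced check should sit between any agent proposal and the instrument queue:

```python
# Hypothetical hard limits per parameter: name -> (min, max). Set by humans, not the agent.
SAFETY_LIMITS = {"temp_c": (5.0, 95.0), "conc_m": (0.0, 2.0)}

def check_proposal(params):
    """Reject any agent-proposed run whose parameters fall outside hard limits.

    Returns (ok, violations); a run proceeds only when ok is True.
    """
    violations = {
        name: value
        for name, value in params.items()
        if name in SAFETY_LIMITS
        and not (SAFETY_LIMITS[name][0] <= value <= SAFETY_LIMITS[name][1])
    }
    return (len(violations) == 0, violations)
```

Anything the check rejects should be logged and routed to human sign-off rather than silently clamped, so the audit trail shows what the agent attempted.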

Early use cases worth testing

  • Materials and chemistry: Catalyst or electrolyte screening, thin-film deposition settings, reaction optimization under resource limits.
  • Bio and health: Media optimization, assay parameter tuning, purification gradients, and automated QC checks.
  • Physics and engineering: Controller gains, optical alignments with feedback, and repeatable environmental scans.
  • Compute-only loops: Literature synthesis, method transfer suggestions, dataset curation, and error analysis.

What to watch next

  • Deeper toolchains: Tighter links between ELN/LIMS, schedulers, robotics, and model-based planners.
  • Standards for trust: Shared formats for provenance, evaluations, and safety attestations across institutions.
  • Multi-agent setups: Specialized agents for planning, execution, and critique coordinating as one system.

Upskill your team

If you're preparing to adopt agents and self-driving workflows, this practical route map helps with tooling, data, and lab integration: AI Learning Path for Research Scientists.

Dasheng signals a clear direction: research that keeps momentum from idea to instrument, with safety and provenance built in. The labs that prepare their data, guardrails, and teams now will feel the compounding benefits first.

