Imperial College London Wins Top QCE25 Awards for Quantum-AI Advances in Typhoon Forecasting and Distributed Quantum Neural Networks

Imperial College London's QuEST took top honors at IEEE QCE25 for practical quantum-AI work. Its typhoon forecasting and distributed quantum network results plug cleanly into existing AI stacks.

Published on: Nov 03, 2025

Quantum-AI Wins Put Imperial College London Out in Front

November 2, 2025 – The promise of quantum computing just took a practical step forward. Researchers at Imperial College London's Centre for Quantum Engineering, Science and Technology (QuEST) earned top honors at the IEEE International Conference on Quantum Computing and Engineering (QCE25) for work that connects quantum mechanics with real AI workloads.

Two projects stood out: a parameter-efficient approach to forecasting typhoon trajectories, and a distributed quantum neural network that runs across geographically separated nodes. Both show how quantum components can plug into existing AI stacks without hype, just measurable progress.

What won, and why it matters

Distinguished Technical Paper + Best in Quantum Applications Track: "Quantum-Enhanced Parameter-Efficient Learning for Typhoon Trajectory Forecasting," led by Dr. Louis Chen. The team focuses on getting more from fewer parameters and limited qubits, aiming for better forecasts without bloated models.

Second Best Technical Paper (Advances in Photonic Quantum Computing Track): "Distributed Quantum Neural Networks," from the Distributed Quantum Computing (DQC) project led by Professor Kin K. Leung. The group demonstrated a distributed quantum neural network running in a hybrid high-performance setup, integrating photonic quantum processors from ORCA Computing with NVIDIA AI infrastructure at the Poznań Supercomputing and Networking Center. The approach was highlighted by NVIDIA and points to a data-center-ready model that mixes quantum and classical resources.

How the hybrid system actually works

  • Parameter-efficient learning: Compact quantum-enhanced models aim to reduce training overhead while keeping accuracy competitive-useful when qubits are scarce.
  • Distributed quantum neural networks: Photonic quantum processors handle specialized subroutines; classical GPUs coordinate training and aggregation.
  • Geographically separated nodes: The system trains across sites without forcing everything into one lab or rack.
  • Data-center fit: The workflow treats quantum hardware like another accelerator attached to an AI stack, which makes integration more straightforward for HPC teams.
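The division of labor above can be sketched in miniature. The toy below is purely illustrative (the one-qubit circuit, the parameter-shift gradients, and the federated-style averaging are stand-ins, not the team's actual method): each "node" simulates a quantum subroutine via the closed-form expectation of a rotation circuit, while a classical coordinator averages gradients across sites and updates the shared parameter.

```python
import numpy as np

def quantum_predict(theta, x):
    # Exact <Z> expectation of a one-qubit circuit RY(theta + x)|0>.
    # This closed form stands in for executing a circuit on real hardware.
    return np.cos(theta + x)

def node_gradient(theta, xs, ys):
    # Local MSE gradient via the parameter-shift rule, which is exact for
    # rotation gates and needs only two extra circuit evaluations.
    s = np.pi / 2
    preds = quantum_predict(theta, xs)
    dpreds = (quantum_predict(theta + s, xs) - quantum_predict(theta - s, xs)) / 2
    return np.mean(2 * (preds - ys) * dpreds)

def train_distributed(node_data, theta=0.0, lr=0.3, steps=100):
    # Classical coordinator: average each site's local gradient, then take
    # one shared descent step -- a federated-style aggregation loop.
    for _ in range(steps):
        grads = [node_gradient(theta, xs, ys) for xs, ys in node_data]
        theta -= lr * np.mean(grads)
    return theta

# Synthetic demo: three "sites" hold data generated by a hidden theta* = 0.7.
rng = np.random.default_rng(0)
theta_star = 0.7
node_data = []
for _ in range(3):
    xs = rng.uniform(-1, 1, 20)
    node_data.append((xs, np.cos(theta_star + xs)))

theta_hat = train_distributed(node_data)
```

The point of the sketch is the shape of the loop, not the model: quantum evaluations stay local to each node, and only scalar gradients cross the network to the classical aggregator.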

Why this is useful for researchers and R&D teams

  • Practical path: You can test quantum components inside current pipelines instead of waiting for ideal qubit counts or error rates.
  • Interoperability: Works with mainstream AI tooling (NVIDIA-class infrastructure), lowering the barrier to experiments.
  • Scalability strategy: Distribute workloads across quantum and classical nodes; scale by adding nodes, not just bigger monolithic systems.
  • Use-case focus: Weather prediction today; healthcare and other data-heavy fields next-where sample efficiency and latency actually matter.

Quantum cloud services are getting real

This model looks a lot like how cloud and HPC centers already schedule accelerators. Photonic quantum processors can sit near GPU clusters, with job orchestration routing the right tasks to the right hardware. That's a blueprint operators understand, which helps pilots that need to show value quickly.
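In scheduler terms, "quantum as another accelerator" is just routing by job type. The toy dispatcher below (all backend and job names are hypothetical) shows the idea: a static routing table maps each kind of work to a pool, exactly as existing HPC schedulers map jobs to GPU partitions.

```python
from collections import defaultdict

# Hypothetical routing table: quantum hardware is just one more backend
# alongside the GPU cluster, selected by the kind of work a job carries.
BACKENDS = {
    "sampling": "photonic-qpu",      # quantum subroutines
    "training": "gpu-cluster",       # classical model updates
    "aggregation": "gpu-cluster",    # classical gradient merging
}

def schedule(jobs):
    # Group job names into per-backend queues according to the table.
    queues = defaultdict(list)
    for job in jobs:
        queues[BACKENDS[job["kind"]]].append(job["name"])
    return dict(queues)

plan = schedule([
    {"name": "boson-sampling-subroutine", "kind": "sampling"},
    {"name": "qnn-weight-update", "kind": "training"},
    {"name": "gradient-merge", "kind": "aggregation"},
])
```

Because the quantum pool is addressed through the same queueing abstraction as any other accelerator, adding or swapping a backend is a table change rather than a pipeline rewrite.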

What to watch next

  • Error management: Smarter error mitigation that doesn't blow up training time.
  • APIs and standards: Cleaner interfaces for hybrid quantum-classical workflows so teams can swap vendors without rewrites.
  • Metrics that matter: Accuracy per parameter, wall-clock training time, energy per inference, and reliability across nodes.
  • Applied benchmarks: Typhoon tracking is a strong testbed; expect similar pilots in medical imaging, optimization, and scientific simulation.

Who's involved

  • Imperial College London (QuEST): Research leadership and systems integration.
  • ORCA Computing: Photonic quantum processors.
  • Poznań Supercomputing and Networking Center: High-performance computing environment.
  • NVIDIA: AI infrastructure and tooling featured in the demonstration.

Learn more

See the IEEE conference hub for context on the awards and tracks: IEEE QCE.

Upskill your team

If you lead research or engineering and want a clear path into AI workflows that can later integrate quantum components, explore role-based learning here: Complete AI Training - Courses by Job.

