Orange CTIO: Science First for Sustainable AI, and Less Reliance on Nvidia

Orange CTIO Bruno Zerbib calls for science-driven GenAI that's steerable, efficient, and auditable. He urges less GPU dependence and European sovereignty across models and agents.

Categorized in: AI News, Science and Research
Published on: Nov 29, 2025

Orange CTIO Bruno Zerbib: Put Science Back at the Core of Sustainable GenAI

Orange CTIO Bruno Zerbib is pushing for a return to fundamental science to make generative AI sustainable, steerable, and safe. He also called out the sector's dependency on a single GPU vendor, and urged Europe to build sovereignty at the model and agent layers while the hardware gap remains.

LLMs are hard to fix: that's the problem

Zerbib's core critique is simple: today's large language models are opaque. If a model goes off-track, you don't "tweak a knob"; you retrain on massive datasets and hope the behavior shifts.

That's slow, compute-hungry, and unreliable. His conclusion: work with startups and researchers to invent architectures and methods that are genuinely steerable and auditable.

R&D directions that move the needle

  • Knowledge editing and local updates: Methods to correct facts or behaviors without full retraining.
  • Modular systems: Retrieval-augmented pipelines and tool-use to externalize knowledge instead of baking everything into parameters.
  • Hybrid neuro-symbolic approaches: Add constraints and rule-checking layers to enforce policies at generation time.
  • Interpretability-first tooling: Map concepts to circuits and units so teams can diagnose and intervene with precision.
  • Efficiency by design: Sparsity, quantization, and Mixture-of-Experts to cut energy, cost, and retraining cycles.
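
The "modular systems" point above can be made concrete. The sketch below is illustrative, not Orange's stack: a tiny in-memory store stands in for a vector database, and every name in it (`Document`, `STORE`, `retrieve`, `build_prompt`) is hypothetical. The design point is that correcting a fact means editing a document, not retraining a model.

```python
# Illustrative sketch of retrieval-augmented generation: knowledge lives
# in an editable store, not in model weights. All names are hypothetical.
import re
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    text: str

# Tiny in-memory "knowledge store"; fixing a fact means editing a
# document here instead of retraining the model.
STORE = [
    Document("d1", "Orange is a French telecom operator."),
    Document("d2", "Mixture-of-Experts activates only a subset of parameters per token."),
]

def _tokens(text: str) -> set[str]:
    return set(re.findall(r"[a-z0-9-]+", text.lower()))

def retrieve(query: str, k: int = 1) -> list[Document]:
    """Naive keyword-overlap retrieval; real systems use vector search."""
    ranked = sorted(
        STORE,
        key=lambda doc: len(_tokens(query) & _tokens(doc.text)),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Ground the generator in retrieved text so knowledge stays editable."""
    context = "\n".join(doc.text for doc in retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
```

Swapping the keyword overlap for embedding similarity changes retrieval quality, not the design: the knowledge stays outside the parameters, which is exactly what makes it auditable and cheap to update.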

GPUs, dependency, and European sovereignty

Zerbib flagged the "monopoly" dynamic around Nvidia GPUs. He pointed out the dissonance of talking sovereignty while relying on one supplier for critical compute. This is not an anti-US stance, he noted, and he expects stronger competition to emerge in the US.

Building a competitive EU GPU stack is a long shot in the near term. His proposal: set a five-to-ten-year ambition, and in the meantime back French, British, and European LLMs, and make the agent layer fully sovereign. For context on the policy push, see the European Chips Act.

Agentic AI: avoid the "snowball effect"

Agent frameworks link models, tools, and services. One weak link can cascade across the chain. Orange is evaluating how to curate and vet agents so they're trustworthy and reliable.

  • Curated registries: Signed agents with capability cards, versioning, and dependency transparency.
  • Sandboxing and least privilege: Capability scoping, time-boxed tokens, and audited I/O.
  • Policy enforcement: Guardrails at plan, tool, and output layers; safe fallbacks on uncertainty spikes.
  • Observability: Traces, red-team replays, and automatic rollback on anomaly detection.
  • Human-in-the-loop: Checkpoints for high-impact actions and irreversible changes.
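
To make the first two safeguards concrete, here is a hedged sketch of how a curated registry plus capability scoping could look in code. The registry entries, agent names, and the `authorize` helper are invented for this example; a production registry would also verify cryptographic signatures rather than a label.

```python
# Illustrative sketch: a curated agent registry enforcing least privilege.
# Agent names, tools, and the signature field are all hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class CapabilityCard:
    agent_id: str
    version: str
    allowed_tools: frozenset
    signed_by: str  # stand-in for a real registry signature

REGISTRY = {
    ("mail-summarizer", "1.2.0"): CapabilityCard(
        "mail-summarizer", "1.2.0",
        allowed_tools=frozenset({"read_inbox"}),
        signed_by="example-registry",
    ),
}

def authorize(agent_id: str, version: str, tool: str) -> CapabilityCard:
    """Least privilege: an agent may only call tools listed on its card."""
    card = REGISTRY.get((agent_id, version))
    if card is None:
        raise PermissionError(f"unregistered agent {agent_id}@{version}")
    if tool not in card.allowed_tools:
        raise PermissionError(f"{agent_id} is not scoped for tool {tool!r}")
    return card
```

Pinning the version in the registry key is deliberate: it makes dependency transparency enforceable, since an upgraded agent must be re-vetted before its new version resolves.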

If you're formalizing governance, the NIST AI Risk Management Framework offers a practical scaffold for controls and evaluation.

Sustainability needs science, not just more features

Zerbib rejects the binary of ignoring climate or giving up on innovation. His "third option" is to direct top mathematical and physics talent back into scientific research that yields breakthroughs, not incremental features.

For labs and teams, that means measuring energy and emissions, running training on cleaner grids, prioritizing small specialized models where possible, and shipping compression and MoE as defaults. Publish energy footprints with model cards and hold yourself to budgets.
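
As one way to operationalize the budgeting idea, here is a minimal sketch of per-project energy accounting; the class, numbers, and carbon-intensity figure are hypothetical, and real telemetry would come from hardware counters or cloud provider metrics.

```python
# Hypothetical sketch of per-project energy and carbon budgets.
from dataclasses import dataclass

@dataclass
class EnergyBudget:
    project: str
    kwh_budget: float
    grid_gco2_per_kwh: float  # carbon intensity of the training grid
    kwh_used: float = 0.0

    def record(self, kwh: float) -> None:
        """Log measured energy; fail loudly when the budget is blown."""
        self.kwh_used += kwh
        if self.kwh_used > self.kwh_budget:
            raise RuntimeError(
                f"{self.project}: energy budget exceeded "
                f"({self.kwh_used:.1f}/{self.kwh_budget:.1f} kWh)"
            )

    def emissions_kg(self) -> float:
        """CO2-equivalent emitted so far, in kilograms."""
        return self.kwh_used * self.grid_gco2_per_kwh / 1000.0
```

Making the budget a hard failure, not a dashboard metric, is the point: an over-budget training run stops the same way an over-budget cloud bill would.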

What science and research leaders can do this quarter

  • Stand up a knowledge-editing track and compare it against full retraining for specific defect classes.
  • Prototype a modular RAG+tools stack to externalize knowledge and reduce parameter churn.
  • Add constraint-based decoding and post-generation verifiers for policy-critical use cases.
  • Instrument carbon and energy telemetry in training and inference; set per-project budgets.
  • Define an agent vetting pipeline: registration, capabilities review, sandbox policy, and kill-switch.
  • Run a bake-off across GPU options where viable; plan for EU/UK model and agent sovereignty regardless of hardware.
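
The verifier item in the list above can be approximated by a post-generation rule layer: check model output against explicit policies before release, and fall back safely on a violation. The sketch below is a toy illustration; the regex rules and fallback text are invented.

```python
# Toy sketch of a post-generation policy verifier with a safe fallback.
# The rules here are invented examples, not a real policy set.
import re

POLICY_RULES = [
    ("no_phone_numbers", re.compile(r"\b\d{2}([ .-]?\d{2}){4}\b")),
    ("no_internal_hosts", re.compile(r"\b[\w-]+\.internal\b")),
]

def verify(text: str) -> list[str]:
    """Return the names of violated rules; an empty list means pass."""
    return [name for name, pattern in POLICY_RULES if pattern.search(text)]

def release(text: str, fallback: str = "[withheld pending review]") -> str:
    """Ship the output only if it passes every rule."""
    return text if not verify(text) else fallback
```

Regexes are the crudest possible rule layer; the same shape holds when the rules are classifiers or symbolic checkers, which is the neuro-symbolic direction the article describes.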

Zerbib's message is blunt: LLMs won't get easier to steer by wishing. If we want sustainability, safety, and sovereignty, we need scientific advances that make these systems editable, interpretable, and efficient by design.

Building team capability for agentic systems and LLM operations? Explore structured paths by role at Complete AI Training.

