OpenAI and Microsoft join UK-led AI safety coalition with £27m boost for alignment research

OpenAI and Microsoft back the UK's AISI Alignment Project, lifting funds past £27m and seeding 60 projects across 8 countries. Next grants open this summer for safer rollout.

Categorized in: AI News IT and Development
Published on: Feb 20, 2026
OpenAI and Microsoft join UK-led AI safety coalition with £27m boost for alignment research

OpenAI and Microsoft back UK Alignment Project with fresh funding: what it means for engineering teams

Published 19 February 2026

OpenAI and Microsoft have joined the UK's AI Security Institute (AISI) international coalition to scale alignment research-work that keeps advanced AI systems safe, secure, and under control.

With a new £5.6 million pledge from OpenAI and additional support from Microsoft and others, the fund now tops £27 million. The first grants have been awarded to 60 projects across 8 countries, with a second round opening this summer.

What's new

  • £27m+ now committed to AISI's flagship Alignment Project; includes £5.6m from OpenAI and support from Microsoft and other partners.
  • 60 projects funded across 8 countries; second grant round opens this summer.
  • Grantees get research funding, access to compute, and mentorship from AISI scientists.
  • Announcement made by Deputy Prime Minister David Lammy and AI Minister Kanishka Narayan at the AI Impact Summit in India (Friday 20 February).

Why this matters for IT and engineering leaders

Alignment is about making sure AI systems do what you intend-consistently, under distribution shift, and at higher capability levels. That means fewer surprises in production, tighter controls for autonomy, and clearer pathways for governance and audit.

Expect more practical methods, benchmarks, and tooling to harden AI features in production: evals, red-teaming, sandboxing, and monitoring that actually catches drift and unsafe outputs before they hit customers.

Where this shows up in your stack

  • Pre-deployment: capability and safety evals, adversarial prompts, jailbreak testing, and regression suites tied to risk tiers.
  • Inference controls: policy-constrained tool use, rate limits, circuit breakers, and content filters tuned with empirical tests (not vibes).
  • System design: sandboxed agents, scoped permissions, time and budget limits, safe planning heuristics, and approval gates for high-impact actions.
  • Training/finetuning: preference optimization and constitutional methods focused on reliability, not just "nicer" outputs.
  • Observability: real-time monitoring for refusal failures, goal misspecification, privacy leaks, and prompt injection-plus incident workflows.
  • Governance: model cards, evaluation reports, and auditable logs that satisfy internal review and external scrutiny.

Who's backing the Alignment Project

  • OpenAI and Microsoft
  • Canadian Institute for Advanced Research (CIFAR)
  • Australian Department of Industry, Science and Resources' AI Safety Institute
  • Schmidt Sciences
  • Amazon Web Services (AWS)
  • Anthropic
  • AI Safety Tactical Opportunities Fund
  • Halcyon Futures Safe AI Fund
  • Sympatico Ventures
  • Renaissance Philanthropy
  • UK Research and Innovation (UKRI)
  • Advanced Research and Invention Agency (ARIA)

Expert advisory board

  • Yoshua Bengio (Université de Montréal; Mila)
  • Zico Kolter (Carnegie Mellon University)
  • Shafi Goldwasser (Simons Institute, UC Berkeley)
  • Andrea Lincoln (Boston University)
  • Buck Shlegeris (Redwood Research)
  • Sydney Levine (Google DeepMind)
  • Marcelo Mattar (New York University)

What leaders are saying

David Lammy, Deputy Prime Minister: AI offers huge opportunities, but safety has to be built in from the start. Support from OpenAI and Microsoft will help push this work forward.

Kanishka Narayan, AI Minister: Trust is still a major blocker to adoption. Alignment research tackles that directly so AI can deliver benefits safely and for everyone.

Mia Glaese, VP of Research at OpenAI: As systems grow more capable and autonomous, alignment must keep pace. No single organisation will solve the hardest problems alone-independent teams testing different approaches are essential.

Why the UK is a hub for this work

The UK hosts leading AI labs, top research institutions, and four of the world's top ten universities. AISI is using that base to direct grant funding, provide compute access, and run ongoing mentorship to move alignment from theory into practice.

How to turn this into engineering outcomes

  • Stand up a risk-tiered eval pipeline for every model update: capability, safety, and regression gates before deployment.
  • Instrument your LLM stack with incident capture, unsafe-output alerts, and auto-rollback for high-risk regressions.
  • Scope agent autonomy: sandbox tools, enforce least-privilege, add budget/time caps, and require human sign-off for high-impact actions.
  • Adopt red-teaming as a service function in your org; set quarterly attack goals and track fix rates like SLOs.
  • Publish model and system cards internally; require evidence (evals, test logs) for change approval.

Timeline and next steps

  • First 60 projects funded now across 8 countries.
  • Second grant round opens this summer; teams should prepare proposals with clear eval plans, risk controls, and measurable outcomes.

Further learning and practical skills


Get Daily AI News

Your membership also unlocks:

700+ AI Courses
700+ Certifications
Personalized AI Learning Plan
6500+ AI Tools (no Ads)
Daily AI News by job industry (no Ads)