OpenAI and Microsoft join UK coalition to secure AI, backing £27m alignment fund and 60 projects

OpenAI and Microsoft join the UK's AI Safety Institute, backing its Alignment Project as AI moves into public services. Fund hits £27m, with 60 grants and a new round this summer.

Categorized in: AI News IT and Development
Published on: Feb 20, 2026
OpenAI and Microsoft join UK coalition to secure AI, backing £27m alignment fund and 60 projects

OpenAI and Microsoft join UK coalition to secure AI development

OpenAI and Microsoft have joined the UK's AI Safety Institute (AISI) initiative to boost trust in AI as it moves deeper into public services and national infrastructure. The announcement, made by UK Deputy Prime Minister David Lammy and AI Minister Kanishka Narayan as the AI Impact Summit in India wraps up, strengthens AISI's Alignment Project.

The fund now stands at £27 million, including £5.6 million from OpenAI and additional support from Microsoft and others. The first Alignment Project grants have been awarded to 60 projects across 8 countries, with a second round opening this summer.

What "alignment" means-and why it matters

AI alignment is about steering advanced systems to act as intended and preventing harmful behavior as capabilities grow. Progress here builds public confidence and supports real outcomes: higher productivity, shorter medical scan times, and new jobs across the country.

Without continued alignment research, more capable models could behave in ways that are hard to predict or control. That's a risk for both safety and governance at scale.

Learn more about the UK AI Safety Institute.

Key takeaways for IT and development teams

  • Expect tighter evaluation and red-teaming requirements. Build automated test suites for jailbreaks, prompt injection, tool-use abuse, and long-horizon failure modes.
  • Define behavioral guarantees up front. Specify acceptable use, capability thresholds, fallback behaviors, and human-in-the-loop gates for sensitive actions.
  • Prioritize practical alignment methods: preference optimization, rule-based guardrails, content filters, tool-permission sandboxes, and post-deployment monitoring.
  • Harden data pipelines. Track consent and provenance, reduce biased signals, and log interventions for auditability.
  • Ship with documentation: model/system cards, risk registers, threat models, and incident playbooks-kept current with every release.
  • Watch for grant opportunities. With 60 projects already funded and a new round coming this summer, proposals tied to evaluation methods, interpretability, and safe scaling will be well placed.

What leaders are saying

David Lammy, UK Deputy Prime Minister: "AI offers us huge opportunities, but we will always be clear-eyed on the need to ensure safety is baked into it from the outset. We've built strong safety foundations which have put us in a position where we can start to realise the benefits of this technology. The support of OpenAI and Microsoft will be invaluable in continuing to progress this effort."

Kanishka Narayan, UK AI Minister: "We can only unlock the full power of AI if people trust it - that's the mission driving all of us. Trust is one of the biggest barriers to AI adoption, and alignment research tackles this head-on. With fresh backing from OpenAI and Microsoft, we're supporting work that's crucial to ensuring AI delivers its huge benefits safely, confidently and for everyone."

How to prepare now

  • Run pre-deployment safety tests alongside functional tests. Treat safety regressions like P0 bugs.
  • Instrument production. Capture prompts, outputs, tool calls, and policy violations with privacy-aware logging and clear retention rules.
  • Adopt recognized practices and standards for AI risk management in your SDLC. Build a simple governance loop: assess → mitigate → monitor → review.
  • Plan capability controls. Rate limit high-risk tools, gate external actions, and introduce staged access as confidence improves.
  • Align your roadmap to the coming grant round: propose work on evaluations, specification design, interpretability, or post-deployment oversight.

For ongoing coverage of alignment-focused funding and methods, see Research.


Get Daily AI News

Your membership also unlocks:

700+ AI Courses
700+ Certifications
Personalized AI Learning Plan
6500+ AI Tools (no Ads)
Daily AI News by job industry (no Ads)