Global Alliance Unites to Tackle AI Safety and Alignment Challenges
The AI Security Institute and global partners launch a coalition to advance AI safety and alignment research. Backed by over £15 million in UK government funding, it offers research grants, compute access, and venture capital support.

AI Security Institute Launches Global Coalition to Secure AI Development
The AI Security Institute has partnered with international organizations, including its Canadian counterpart, Amazon, Anthropic, and civil society groups, to launch the Alignment Project, a new research initiative focused on AI behaviour and control. The coalition aims to address critical issues in AI safety, security, and human oversight, supporting the UK government’s Plan for Change by establishing strong foundations for AI development.
Guided by a prestigious advisory board featuring Turing Award winners Shafi Goldwasser and Yoshua Bengio, the project will advance global AI alignment research. Its goal is to ensure AI systems behave predictably and as intended, helping unlock AI’s benefits while reinforcing national security.
What is AI Alignment?
AI alignment focuses on ensuring that AI systems act in accordance with human interests and values. It seeks to detect and mitigate behaviours that could pose risks to society. The UK government has committed over £15 million to fund this initiative, reinforcing its position as a leader in AI safety research and international collaboration.
International Collaboration and Support
The Alignment Project is led by the UK’s AI Security Institute and backed by a coalition including the Canadian AI Safety Institute, Canadian Institute for Advanced Research (CIFAR), Schmidt Sciences, Amazon Web Services (AWS), Anthropic, Halcyon Futures, the Safe AI Fund, UK Research and Innovation, and the Advanced Research and Invention Agency (ARIA). This partnership reflects a shared global responsibility to tackle one of AI’s most pressing technical challenges.
Research Focus and Funding Structure
This project will support pioneering research aimed at keeping AI aligned with human goals as AI systems grow more capable. It will explore ways to maintain transparency and human control over AI decision-making processes. Given the pace of AI advancements, current control methods may soon prove insufficient, making coordinated international action essential.
The project offers three main forms of support:
- Grant funding: Up to £1 million for researchers across disciplines, from computer science to cognitive science.
- Compute access: Up to £5 million in cloud computing credits from AWS to enable large-scale technical experiments.
- Venture capital: Private investment to accelerate commercial solutions for AI alignment challenges.
By combining funding, infrastructure, and market incentives, the coalition aims to overcome barriers that have limited progress in AI alignment.
Voices from the Coalition
Science, Innovation and Technology Secretary Peter Kyle highlighted the urgency: “Advanced AI systems already outperform humans in some areas, so driving research to ensure these systems act in our interests is vital. The Alignment Project will make AI more reliable and trustworthy, supporting economic growth and national security.”
Geoffrey Irving, Chief Scientist at the AI Security Institute, emphasized the challenge: “Misaligned, capable AI systems could behave unpredictably with serious consequences. This project brings together governments, industry, philanthropists, and researchers to close critical gaps in alignment research.”
Jack Clark, Co-Founder of Anthropic, added: “Improving understanding of how AI systems work is urgent. We’re pleased to collaborate on this project to focus on these issues.”
Nora Ammann, Technical Specialist at ARIA, noted the importance of mathematically rigorous assurances for AI systems, complementing ongoing work in AI safety.
John Davies, Managing Director at Amazon Web Services, remarked: “Providing free cloud computing credits will help researchers run experiments to test AI safety, fostering collaboration across sectors.”
The Honourable Evan Solomon, Canadian Minister, said: “This partnership reflects a commitment to responsible AI development that will benefit both our economies and societies.”
Mark Greaves, Executive Director at Schmidt Sciences, stated: “AI alignment is a scientific challenge requiring cross-disciplinary effort. This project attracts top researchers and new talent to meet it.”
Professor Charlotte Deane, Executive Chair at EPSRC, explained: “This partnership connects fundamental research with practical AI safety challenges, strengthening the UK’s AI ecosystem.”
Implications for AI Development and Research
With AI models rapidly advancing toward expert-level knowledge in some domains, ensuring their behaviour aligns with human values is critical. The Alignment Project’s funding and resources will let researchers experiment at a scale previously out of reach, fostering breakthroughs in transparency and control.
As AI becomes increasingly integrated into public services, business, and security, the coalition’s work will help build trust in these technologies. This effort not only supports AI innovation but also addresses the safety concerns that could hinder broader adoption.
For professionals in IT, development, and AI research, the Alignment Project represents a significant opportunity to contribute to foundational AI safety work. Access to grants, compute resources, and venture support can accelerate progress in this essential area.