Singapore Consensus Sets Global Priorities for AI Safety Research and Trustworthy Innovation

The 2025 Singapore Conference on AI identified key safety research priorities focusing on trustworthy development, risk assessment, and post-deployment control. This consensus guides global efforts to ensure AI evolves responsibly and securely.

Published on: Jun 30, 2025

The Singapore Consensus on Global AI Safety Research Priorities

Abstract and Key Insights

AI capabilities and autonomy are advancing quickly, offering transformative potential but also raising critical questions about safety. Ensuring AI systems are trustworthy, reliable, and secure is essential for building public confidence and enabling innovation without triggering backlash. The 2025 Singapore Conference on AI (SCAI) gathered international AI scientists to identify key research priorities in AI safety, producing a report that complements the International AI Safety Report supported by 33 governments.

The report organizes AI safety challenges into three main categories following a defence-in-depth approach:

  • Development: Creating trustworthy AI systems.
  • Assessment: Evaluating risks and behaviors of AI.
  • Control: Monitoring and intervening post-deployment.

This structure highlights how AI safety techniques interact: systems are made to behave as intended by combining design (Development), evaluation (Assessment), and oversight (Control). Some assessment tools support both the design and control phases, and the boundary between categories depends on what is considered part of the AI system versus an external control mechanism. For example, an external filter that blocks unsafe queries can be viewed either as part of the system or as a separate control layer, depending on perspective; the sketch below makes this concrete.
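To make the filter example concrete, here is a minimal sketch in Python. All names in it (the `UNSAFE_PATTERNS` blocklist, `model_respond`, `control_filter`, `deployed_system`) are hypothetical illustrations rather than anything specified in the report; the point is only that the same wrapped behavior can be read as one AI system or as a model plus an external control layer.

```python
# Hypothetical illustration of an external safety filter wrapped around a model.
# None of these names come from the Singapore Consensus report.

UNSAFE_PATTERNS = ("synthesize a pathogen", "disable a safety interlock")  # illustrative blocklist


def model_respond(query: str) -> str:
    """Stand-in for the underlying AI model."""
    return f"Model answer to: {query}"


def control_filter(query: str) -> bool:
    """External control layer: True means the query should be blocked."""
    return any(pattern in query.lower() for pattern in UNSAFE_PATTERNS)


def deployed_system(query: str) -> str:
    """The filter wraps the model before any output is produced."""
    if control_filter(query):
        return "Request declined by safety filter."
    return model_respond(query)


if __name__ == "__main__":
    print(deployed_system("Explain how photosynthesis works."))
```

Viewed from outside, `deployed_system` is simply "the AI system"; viewed from inside, `control_filter` is a control mechanism layered on top of the model, which is exactly the boundary question the report raises.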

Artificial General Intelligence (AGI) can be understood as the intersection of three properties: Autonomy, Generality, and Intelligence. Narrow AI such as AlphaFold is highly intelligent within a specific domain but lacks autonomy and generality, while other systems may be highly autonomous yet limited in generality or intelligence. Systems that lack one or more of the three properties are generally easier to manage. Hypothetical future systems, such as highly autonomous self-driving cars, carry nuanced risks but may pose lower loss-of-control concerns because their domains are constrained.
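As a toy illustration of this three-property framing, the sketch below scores two of the systems mentioned above along the three axes. The numeric scores, the 0-to-1 scale, and the `is_agi_like` threshold are invented for illustration and do not appear in the report.

```python
# Toy model of AGI as the intersection of Autonomy, Generality, and Intelligence.
# Scores and threshold are invented for illustration only.
from dataclasses import dataclass


@dataclass
class AISystemProfile:
    name: str
    autonomy: float      # 0.0 to 1.0, illustrative scale
    generality: float
    intelligence: float

    def is_agi_like(self, threshold: float = 0.8) -> bool:
        """AGI sits at the intersection: all three properties must be high."""
        return min(self.autonomy, self.generality, self.intelligence) >= threshold


alphafold = AISystemProfile("AlphaFold", autonomy=0.1, generality=0.1, intelligence=0.9)
robotaxi = AISystemProfile("Self-driving car", autonomy=0.9, generality=0.2, intelligence=0.6)

for system in (alphafold, robotaxi):
    print(f"{system.name}: AGI-like = {system.is_agi_like()}")
```

Both examples come out non-AGI: each is strong on one axis but weak on at least one other, which is why such systems are easier to manage than a system that is high on all three.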

Building a Trustworthy, Reliable, and Secure AI Ecosystem

Creating a trusted AI ecosystem is vital for supporting innovation and gaining public acceptance. The report stresses a structured approach covering the entire AI lifecycle, from development through risk assessment to ongoing control after deployment. This approach balances enabling progress with managing potential harms effectively.

Conclusion

The Singapore Consensus outlines clear research priorities to guide AI safety efforts globally. It provides a foundation for aligning AI advancements with societal values and safety standards, ensuring that AI technologies evolve responsibly.