SAFEXPLAIN Project Debuts Core Demo for Safe, Explainable AI in Space, Automotive, and Rail Systems

The SAFEXPLAIN project launched a Core Demo showcasing safe, explainable AI for critical sectors like space, automotive, and rail. The platform enables AI integration with built-in monitoring and transparency.

Published on: Jul 04, 2025

SAFEXPLAIN Project Launches Core Demo for Safe and Explainable AI

Barcelona, 03 July 2025 — The SAFEXPLAIN project, coordinated by the Barcelona Supercomputing Center (BSC) and involving six interdisciplinary partners, has introduced its Core Demo. This open software platform demonstrates how safe, explainable, and certifiable AI can be applied in critical sectors such as space, automotive, and rail.

During a recent webinar hosted by BSC and exida development, the project partners presented a walkthrough of the Core Demo along with detailed slides. These resources are now publicly accessible, offering the community a clear view of SAFEXPLAIN’s comprehensive approach to AI safety and explainability.

Addressing the Challenges of Trustworthy AI in Critical Systems

AI is increasingly responsible for complex tasks in safety-critical environments—from autonomous driving to satellite operations. Yet, most AI systems today operate as “black boxes” that are difficult to verify or trace, which conflicts with the strict certification requirements in fields like transportation and healthcare.

SAFEXPLAIN tackles this issue by providing a platform that integrates AI/ML components safely into critical systems. Its approach aligns with emerging standards such as SOTIF (Safety of the Intended Functionality, ISO 21448) by building runtime monitoring and explainability into the design.

What the Core Demo Shows

The Core Demo is a modular, small-scale system illustrating key SAFEXPLAIN technologies. It highlights how AI can influence decisions without being the sole decision-maker, ensuring safety by construction without sacrificing system performance.

Its flexible architecture consists of interchangeable building blocks for Inference, Supervision, and Diagnostics. This design allows easy adaptation to various scenarios:

  • Space: AI supports docking maneuvers between spacecraft, handling target identification and pose estimation while the system retains overall control.
  • Automotive: The demo envisions AI assisting in braking systems.
  • Rail: The system aids in detecting anomalies in signaling and onboard diagnostics.
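
To make the modular structure more concrete, the sketch below shows, in Python, how interchangeable Inference, Supervision, and Diagnostics blocks could be composed so that the AI proposes a result but never acts alone. The class names and interfaces are illustrative assumptions and are not taken from the SAFEXPLAIN software itself.

```python
# Illustrative sketch only: these names and interfaces are hypothetical,
# not part of the SAFEXPLAIN release. They show how interchangeable
# Inference, Supervision, and Diagnostics blocks could be composed.
from dataclasses import dataclass
from typing import Any, Protocol


class InferenceBlock(Protocol):
    def predict(self, sensor_frame: Any) -> dict: ...


class SupervisionBlock(Protocol):
    def check(self, sensor_frame: Any, prediction: dict) -> bool: ...


class DiagnosticsBlock(Protocol):
    def record(self, sensor_frame: Any, prediction: dict, accepted: bool) -> None: ...


@dataclass
class SafetyPipeline:
    """Composes the three blocks; the AI is never the sole decision-maker."""
    inference: InferenceBlock
    supervision: SupervisionBlock
    diagnostics: DiagnosticsBlock

    def step(self, sensor_frame: Any, fallback: dict) -> dict:
        prediction = self.inference.predict(sensor_frame)
        accepted = self.supervision.check(sensor_frame, prediction)
        self.diagnostics.record(sensor_frame, prediction, accepted)
        # The surrounding system keeps control: a rejected prediction
        # falls back to a conventional, verifiable estimate.
        return prediction if accepted else fallback
```

Because each block sits behind a small interface, swapping the space docking scenario for the automotive or rail one would, in this sketch, only require exchanging the concrete block implementations.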

In the space case shown in the webinar, the AI component partially affects decision-making during docking, while the SAFEXPLAIN platform actively monitors for sensor faults and system drifts.
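
As a rough illustration of that monitoring role, the hypothetical supervision block below rejects physically implausible distance estimates (a simple sensor-fault check) and flags a shift in the running mean of recent predictions (a simple drift check). The thresholds and checks are assumptions made for the example, not the project's actual rules; in the pipeline sketched above it would occupy the Supervision slot.

```python
# Hypothetical supervision block for the docking scenario. The checks and
# thresholds are assumptions for illustration only.
from collections import deque
from statistics import fmean


class DockingSupervisor:
    def __init__(self, valid_range=(0.1, 50.0), drift_window=100, drift_limit=0.5):
        self.valid_range = valid_range             # plausible target distance in metres (assumed)
        self.history = deque(maxlen=drift_window)  # recent predicted distances
        self.drift_limit = drift_limit             # allowed shift of the running mean (assumed)
        self.baseline = None

    def check(self, sensor_frame, prediction) -> bool:
        distance = prediction["target_distance"]
        # Sensor-fault check: reject physically implausible estimates.
        if not (self.valid_range[0] <= distance <= self.valid_range[1]):
            return False
        # Drift check: compare the running mean of a full window to a baseline.
        self.history.append(distance)
        if len(self.history) == self.history.maxlen:
            mean = fmean(self.history)
            if self.baseline is None:
                self.baseline = mean
            elif abs(mean - self.baseline) > self.drift_limit:
                return False
        return True
```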

Next Steps: From Demo to Full-Scale Applications

The webinar recording and presentation slides are available online for review. Early adopters and interested professionals are encouraged to test the Core Demo in their own environments by reaching out to safexplainproject@bsc.es.

Looking ahead, the project will release fully developed end-to-end demonstrators covering space, automotive, and rail domains. These will feature Operational Design Domain (ODD)-based scenarios and comprehensive test suites. The final releases are scheduled for September 2025 during the event “Trustworthy AI In Safety-Critical Systems: Overcoming adoption barriers.” Registration is open until 8 September 2025 or until venue capacity is reached.

For more details and to access the Core Demo webinar and materials, visit the official project website or contact safexplainproject@bsc.es.