CMU Researchers Build AI System to Predict Airport Collisions
Researchers at Carnegie Mellon University's Robotics Institute have developed World2Rules, an AI system that identifies potential aircraft collisions and explains the safety violations behind them. The system analyzes real airport data to spot dangerous patterns before incidents occur.
The work was prompted by near misses at major airports, including a recent incident at New York's John F. Kennedy International Airport. Jack Wang, a master's student on the project, said runway incursions have escalated in frequency. "Sometimes they're minor, but sometimes they can be quite catastrophic," he said.
How It Works
World2Rules combines neural and symbolic AI approaches. The neural component identifies patterns buried in airport data. The symbolic component converts those patterns into explicit, human-readable safety rules.
When the system detects a potential violation, it does more than trigger an alert. It identifies which specific safety rule is being broken and explains why the scenario matches known patterns of danger.
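The two-stage idea described above can be sketched in a few lines. This is a hypothetical illustration, not CMU's code: a toy learned scorer flags unusual traffic states, and an explicit rule check turns the flag into a named violation with a readable explanation. All class names, thresholds, and the crowding heuristic are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Track:
    callsign: str
    surface: str        # e.g. "RW04L" for a runway, "TWY_B" for a taxiway
    speed_kts: float

def anomaly_score(tracks: list[Track]) -> float:
    """Stand-in for the neural component: a toy heuristic that rises
    when several moving aircraft share any one surface."""
    crowding: dict[str, int] = {}
    for t in tracks:
        crowding[t.surface] = crowding.get(t.surface, 0) + (1 if t.speed_kts > 5 else 0)
    return max(crowding.values(), default=0) / max(len(tracks), 1)

def explain_violations(tracks: list[Track]) -> list[str]:
    """Symbolic component: an explicit rule check that names the rule
    being broken instead of emitting a bare alert."""
    occupants: dict[str, list[str]] = {}
    for t in tracks:
        if t.surface.startswith("RW"):
            occupants.setdefault(t.surface, []).append(t.callsign)
    return [
        f"Rule violated: single-runway occupancy on {rw} ({', '.join(cs)})"
        for rw, cs in occupants.items() if len(cs) > 1
    ]

def monitor(tracks: list[Track], threshold: float = 0.5) -> list[str]:
    """Escalate to a rule explanation only when the learned score is high."""
    if anomaly_score(tracks) >= threshold:
        return explain_violations(tracks)
    return []
```

The design point is the split itself: the scorer can be swapped for any learned model, while the rule check stays auditable by a human.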
The team built the system using the Amelia-42 dataset, which contains two years of Federal Aviation Administration airport surface movement data from 42 U.S. airports. The dataset tracks aircraft and vehicle movement across runways and taxiways and includes both normal operations and documented crashes and incidents. Processing the data required the Bridges-2 supercomputer at the Pittsburgh Supercomputing Center.
Practical Application
World2Rules is designed to integrate into existing collision-prediction systems rather than replace them. It could give air traffic controllers and automated systems earlier, clearer warnings of potential dangers, buying pilots and controllers critical extra moments to react.
The system learns rules from observed patterns, such as two aircraft occupying the same runway simultaneously, and applies those rules to flag risky scenarios before they unfold.
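Applying a learned rule to a predicted state, rather than the current one, is what buys the extra reaction time. The sketch below is an illustrative assumption, not the published method: it dead-reckons each aircraft a few seconds ahead along a one-dimensional runway axis and checks whether the projected positions would violate a minimum-separation rule. The motion model, horizon, and separation distance are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class State:
    callsign: str
    x_m: float       # position along a 1-D runway axis, metres
    v_mps: float     # ground speed along that axis, metres/second
    runway: str

def project(state: State, horizon_s: float) -> State:
    """Dead-reckon the aircraft forward by horizon_s seconds."""
    return State(state.callsign, state.x_m + state.v_mps * horizon_s,
                 state.v_mps, state.runway)

def predicted_conflicts(states: list[State], horizon_s: float = 10.0,
                        sep_m: float = 300.0) -> list[str]:
    """Flag pairs predicted to be on the same runway within sep_m."""
    future = [project(s, horizon_s) for s in states]
    alerts = []
    for i in range(len(future)):
        for j in range(i + 1, len(future)):
            a, b = future[i], future[j]
            if a.runway == b.runway and abs(a.x_m - b.x_m) < sep_m:
                alerts.append(f"{a.callsign} and {b.callsign} predicted "
                              f"within {sep_m:.0f} m on {a.runway} "
                              f"in {horizon_s:.0f} s")
    return alerts
```

For example, an aircraft rolling down a runway toward another taxiing onto it from the far end would trigger an alert several seconds before the two are actually close.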
Beyond Aviation
Sebastian Scherer, an associate research professor in the Robotics Institute and head of the AirLab, said the technology extends beyond aviation. "The system can be adapted to different environments by teaching it the relevant rules and behaviors for that domain," Scherer said. "Once that information is defined, the same core technology can learn and monitor safety risks without needing to be redesigned."
The team presented their results at the NASA Formal Methods Symposium in Los Angeles in May 2026.
For IT professionals building safety-critical systems, the approach demonstrates how pattern recognition can be combined with interpretable rule generation.