Using AI in aerospace software without breaking DO-178C
AI can speed up development and reduce repetitive work. But in safety-critical aerospace software, certification integrity, determinism and traceability come first, every time.
The good news: AI can help. The catch: you must keep humans in control and the build predictable.
How AI fits into DO-178C
Treat AI as a skilled assistant, not an autonomous source of certified code or documentation. It's useful for exploring options, surfacing inconsistencies and drafting material you'll later verify. The responsibility for every decision still sits with human engineers.
Used this way, AI accelerates work without putting certification at risk.
Where AI helps most (and safely)
The early lifecycle offers the cleanest wins. AI can propose ways to decompose high-level requirements into precise software requirements, suggest modular architectures that improve partitioning and redundancy, and flag potential single points of failure.
Example: if a requirement says "The system must detect sensor failure," AI can outline detection strategies, timing constraints and safe-state transitions. You still validate those choices and document the rationale for reviews.
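As a concrete illustration of what "deterministic detection with a safe-state transition" can look like once you have validated the strategy, here is a minimal sketch. Everything in it is an assumption for illustration: the `SENSOR_TIMEOUT_MS` timing constraint, the mode names, and the function itself are invented, not taken from any standard or real system.

```python
from enum import Enum

SENSOR_TIMEOUT_MS = 50  # assumed timing constraint derived from the requirement


class Mode(Enum):
    NORMAL = 1
    SAFE_STATE = 2


def check_sensor(last_sample_age_ms: int, mode: Mode) -> Mode:
    """Transition to SAFE_STATE if the sensor sample is stale.

    Deterministic by construction: the result depends only on the
    two inputs, with no hidden state, timing jitter, or randomness.
    Once in SAFE_STATE, the mode latches (no silent recovery).
    """
    if mode is Mode.SAFE_STATE:
        return Mode.SAFE_STATE
    if last_sample_age_ms > SENSOR_TIMEOUT_MS:
        return Mode.SAFE_STATE
    return mode
```

However the AI arrived at the threshold and the latching behavior, the reviewed rationale for both is what goes into the certification record.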
AI-generated code is held to the same standard
From a DO-178C perspective, AI output is no different than human output. It must be explainable, reviewed and fully verified. Every line of code must:
- Trace back to a verified low-level requirement
- Comply with approved coding standards
- Be reviewed and understood by independent engineers
- Be verified through deterministic testing
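One lightweight way to make the first requirement above mechanically checkable is to tag each function with the low-level requirement it implements and scan for untagged code. The `@llr` tag convention and the requirement IDs below are invented for illustration; real projects use their traceability tooling's own format.

```python
import re

# Hypothetical convention: every function docstring carries an "@llr LLR-nnn" tag.
SOURCE = '''
def clamp_airspeed(v):
    """Clamp commanded airspeed.  @llr LLR-042"""
    return min(max(v, 0.0), 250.0)

def untraced_helper(x):
    """No requirement tag here."""
    return x
'''


def untraced_functions(source: str) -> list[str]:
    """Return names of functions whose docstring lacks an @llr tag."""
    missing = []
    for m in re.finditer(r'def (\w+)\(.*?\):\n\s+"""(.*?)"""', source, re.S):
        name, doc = m.group(1), m.group(2)
        if not re.search(r"@llr\s+LLR-\d+", doc):
            missing.append(name)
    return missing
```

A check like this does not replace a traceability review; it only flags code with no claimed parent requirement before a human ever looks at it.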
If you're exploring code generation, see practical guidance on treating AI output like developer-produced code: Generative Code.
Verification support (with strict validation)
AI can suggest unit tests from requirements, propose boundary-value cases and analyze logs to reveal coverage gaps or anomalies. That saves time, especially on large codebases.
But nothing skips independent validation. Keep AI suggestions explainable, reproducible and backed by evidence.
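For example, the boundary-value cases an AI might propose for a simple range-limiting requirement look like ordinary developer-written tests once validated. The `limit_command` function and its 0.0 to 100.0 range are illustrative assumptions, not a real interface.

```python
def limit_command(value: float) -> float:
    """Limit an actuator command to the [0.0, 100.0] range (illustrative)."""
    return max(0.0, min(value, 100.0))


# Boundary-value cases: at, just inside, and just outside each limit.
BOUNDARY_CASES = [
    (-0.1, 0.0),     # just below the lower bound -> clamped
    (0.0, 0.0),      # exactly at the lower bound
    (0.1, 0.1),      # just inside the lower bound
    (99.9, 99.9),    # just inside the upper bound
    (100.0, 100.0),  # exactly at the upper bound
    (100.1, 100.0),  # just above the upper bound -> clamped
]
```

Each case still needs a human-confirmed link to the requirement that defines the range; the AI only saves the typing, not the independent validation.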
Hard limits you should not cross
- AI does not make final safety-critical decisions
- AI does not modify flight-critical code without human review
- AI does not replace formal verification activities
- AI does not obscure deterministic behavior
Predictability and explainability are non-negotiable. Regulators will expect proof.
DAL levels, DO-330 and tool use
As Design Assurance Levels move from E to A, verification rigor and independence rise. Trust in automated tools drops accordingly, especially at DAL A and B. Most teams classify AI as a development tool and treat its output as advisory.
To avoid complex tool qualification under DO-330, keep AI-generated artifacts outside the unverified build chain. Lock model versions, archive prompts and responses, and place AI systems under configuration control to preserve determinism and reproducibility.
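A minimal sketch of the archiving step, under an assumed record layout (the field names are invented): each model version, prompt, and response is stored with a content hash, so reviewers can later confirm that an archived artifact is unchanged and is the one that produced the code under review.

```python
import hashlib
import json


def cm_record(model_version: str, prompt: str, response: str) -> dict:
    """Build a configuration-management record for one AI interaction.

    The SHA-256 digest covers all three fields via a canonical JSON
    encoding, so any later change to the archived artifact is detectable.
    """
    payload = json.dumps(
        {"model": model_version, "prompt": prompt, "response": response},
        sort_keys=True,
    )
    return {
        "model": model_version,
        "prompt": prompt,
        "response": response,
        "sha256": hashlib.sha256(payload.encode("utf-8")).hexdigest(),
    }
```

Because the digest is deterministic, re-running the archiver over the same inputs reproduces the same record, which is exactly the reproducibility property configuration control is meant to preserve.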
For context on DO-178C acceptance, see FAA AC 20-115D: Use of RTCA DO-178C. For tool qualification guidance, review RTCA DO-330.
Controls that keep you cert-ready
- Review all AI-generated code line by line
- Never accept AI output without human verification
- Maintain end-to-end traceability from requirements to test evidence
- Preserve deterministic builds and reproducibility
- Keep approvals and accountability with human engineers
- Document AI usage, model versions and prompts as part of your configuration records
- Ensure independence for reviews and verification consistent with your DAL
Treat AI like a fast junior engineer
Think of AI as a quick, tireless helper that still needs supervision. Aerospace workflows already emphasize independent review, which is exactly what keeps AI safe to use.
The workload shifts from writing first drafts to reviewing, refining and justifying, without relaxing any certification bar.
Practical adoption roadmap
- Define a written policy for AI use (scope, data handling, review expectations, DAL limits)
- Select low-risk use cases first: requirement elaboration, boilerplate code, test suggestions, log analysis
- Standardize prompts and templates to improve consistency and traceability
- Lock and version AI models; archive prompts and outputs in your CM system
- Train teams on risks, verification discipline and tool boundaries: AI Learning Path for Software Developers
- Pilot on lower DAL systems or non-flight-critical components; expand only after audits
- Continuously review evidence quality and tighten controls before scaling
Bottom line: AI can speed the work, but certification keeps the final say. In aerospace, trust is earned through evidence, not intelligence.