Levels of intelligence: From robotic arms to autonomous cars
Artificial intelligence (AI) is now embedded in everyday life, powering everything from smartphones to industry-critical systems. As AI advances, categorizing its different forms helps clarify capabilities, set expectations, and address safety and regulatory needs. While no universal AI classification exists yet, examining current frameworks and conceptual models reveals how we might structure this understanding.
A proven model: SAE Levels of driving automation
The automotive industry offers a clear example with the SAE J3016 standard, defining six levels of driving automation. This framework guides engineers, regulators, and users by differentiating system capabilities and human responsibilities.
- Level 0: No Driving Automation – The human driver does everything; some safety features might intervene momentarily but don't control driving.
- Level 1: Driver Assistance – The system assists with steering or speed control, but the driver supervises and manages all else.
- Level 2: Partial Driving Automation – The system handles steering and speed simultaneously under conditions; the driver must stay alert and ready to intervene.
- Level 3: Conditional Driving Automation – The system manages all driving tasks within defined conditions, allowing the driver to disengage but requiring readiness to take control if needed.
- Level 4: High Driving Automation – The system fully controls driving and fallback within its operational domain; no driver attention is needed in those areas.
- Level 5: Full Driving Automation – The system drives autonomously under all conditions without human input; no such system is commercially available today.
This classification has clarified expectations and accelerated regulatory discussions for autonomous vehicles, proving valuable in managing complex AI systems.
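To see why a discrete taxonomy is useful in practice, the SAE levels could be encoded directly in software, for example in a fleet dashboard that must know who is responsible for the driving task. This is an illustrative sketch only; the enum names and the helper function are invented here, not part of the J3016 standard or any official API.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """The six SAE J3016 driving automation levels (names are informal)."""
    NO_AUTOMATION = 0
    DRIVER_ASSISTANCE = 1
    PARTIAL_AUTOMATION = 2
    CONDITIONAL_AUTOMATION = 3
    HIGH_AUTOMATION = 4
    FULL_AUTOMATION = 5

def driver_must_supervise(level: SAELevel) -> bool:
    """At Levels 0-2 the human driver must supervise at all times;
    from Level 3 upward the system handles the task within its domain."""
    return level <= SAELevel.PARTIAL_AUTOMATION

print(driver_must_supervise(SAELevel.PARTIAL_AUTOMATION))      # True
print(driver_must_supervise(SAELevel.CONDITIONAL_AUTOMATION))  # False
```

Because the levels are ordinal, an `IntEnum` lets the responsibility check reduce to a single comparison, which is exactly the kind of unambiguous rule a shared taxonomy makes possible.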
Could there be a universal AI level standard?
Inspired by the SAE model, a universal “Levels of AI” framework could help consumers and industries better understand AI products. Such a standard might indicate an AI system's capabilities and limits, aiding transparency and safety.
International bodies such as ISO, through initiatives like ISO/IEC JTC 1/SC 42, are working on AI standards, but creating a single leveling system is challenging. Intelligence varies widely, from rule-based robots to complex diagnostic AI, making any linear scale difficult to define. AI also evolves quickly, so fixed classifications risk becoming outdated.
Questions remain about who defines these levels and how systems get certified. Despite the hurdles, a structured framework could improve clarity and trust.
A conceptual 10-level AI framework
Here’s a practical way to think about AI capabilities, from simple automation to advanced speculative forms. This is a conceptual model, not an official standard.
- 1. Rule-Based Systems
AI operates on fixed rules set by humans.
Examples: Industrial robotic arms, simple automated guided vehicles, basic inventory checks.
- 2. Context-Based Systems (Context-Aware AI)
AI adapts to its environment.
Examples: Collaborative robots slowing near humans, autonomous mobile robots in warehouses, smart thermostats.
- 3. Narrow Domain AI (Artificial Narrow Intelligence – ANI)
AI specialized for specific tasks; all current AI fits here.
Examples: Autonomous cars, Siri, Alexa, medical image analysis AI, recommendation systems.
- 4. Reasoning AI
AI capable of logical inference and decision-making beyond fixed rules.
Examples: Autonomous vehicles making complex navigation choices, logistics optimization AI, advanced diagnostic tools.
- 5. Self-Aware Systems
Hypothetical AI with consciousness and self-understanding.
Fictional examples: HAL 9000 (2001: A Space Odyssey), Skynet (Terminator).
- 6. Artificial General Intelligence (AGI)
AI with human-like cognitive flexibility across tasks.
Fictional examples: Data (Star Trek), Ava (Ex Machina), R2-D2 and C-3PO (Star Wars).
- 7. Artificial Superintelligence (ASI)
AI surpassing human intelligence in virtually all areas.
Fictional examples: The Machines (The Matrix), V.I.K.I. (I, Robot).
- 8. Transcendent AI
Highly speculative AI beyond human comprehension.
Fictional examples: Samantha in Her, the advanced intelligence in Transcendence.
- 9. Cosmic AI
Theoretical AI with cosmic-scale understanding.
Fictional examples: The Monolith intelligence (2001: A Space Odyssey), pan-galactic AI in sci-fi literature.
- 10. Godlike AI
Speculative AI with apparent omnipotence or omniscience.
Fictional example: Q (Star Trek).
This hierarchy spans practical AI to speculative extremes, emphasizing why nuanced classification matters.
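One way to make the boundary between practical and speculative levels concrete is to encode the hierarchy as a simple lookup table. The sketch below mirrors the ten levels above and flags which ones have real-world examples today versus purely fictional or theoretical ones; the data structure and function names are invented for illustration, not any standard.

```python
# Conceptual 10-level hierarchy from the article, as (name, speculative) pairs.
# Levels 1-4 have real-world examples; levels 5-10 exist only in fiction or theory.
AI_LEVELS = {
    1: ("Rule-Based Systems", False),
    2: ("Context-Based Systems", False),
    3: ("Narrow Domain AI (ANI)", False),
    4: ("Reasoning AI", False),
    5: ("Self-Aware Systems", True),
    6: ("Artificial General Intelligence (AGI)", True),
    7: ("Artificial Superintelligence (ASI)", True),
    8: ("Transcendent AI", True),
    9: ("Cosmic AI", True),
    10: ("Godlike AI", True),
}

def speculative_levels():
    """Return, in order, the names of levels that remain hypothetical."""
    return [name for _, (name, speculative) in sorted(AI_LEVELS.items())
            if speculative]

print(speculative_levels()[0])  # Self-Aware Systems
```

Even a toy encoding like this makes the key claim of the hierarchy machine-checkable: everything deployed today sits at or below level 4.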
How do we measure intelligence?
There’s no universal unit to measure intelligence like meters for length. Human IQ scores are relative, comparing individuals statistically rather than providing absolute intelligence values.
Measuring AI intelligence numerically is even more complex. Instead of a single score, categorical levels based on capabilities—like learning flexibility, reasoning, and autonomy—offer a clearer approach. Technical metrics such as parameter counts or benchmark performance capture parts of intelligence but miss the full picture.
Structured categorization balances clarity and nuance, making it a practical path forward.
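The categorical approach described above could be sketched as a small rubric: rather than collapsing everything into one number, each capability dimension is rated separately, and the assigned category is bounded by the weakest dimension. The dimensions, rating scale, and category names below are hypothetical, chosen only to illustrate the idea.

```python
# Hypothetical capability rubric. Dimensions and thresholds are invented
# for illustration; a real framework would need agreed, testable criteria.
CAPABILITIES = ("learning_flexibility", "reasoning", "autonomy")

def categorize(scores: dict) -> str:
    """Assign a coarse category from per-capability ratings (0-3 each).
    A single aggregate score would hide which dimension is weak, so the
    category is capped by the *weakest* dimension instead."""
    weakest = min(scores[c] for c in CAPABILITIES)
    return {0: "rule-based", 1: "context-aware",
            2: "narrow AI", 3: "reasoning AI"}[weakest]

print(categorize({"learning_flexibility": 2, "reasoning": 3, "autonomy": 1}))
# context-aware  (the low autonomy rating caps the category)
```

This min-based design is one answer to the measurement problem: it preserves per-dimension nuance while still yielding a single, communicable label.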
Why categorize AI?
Clear AI categories can improve safety by informing users about system limits, especially when AI interacts with the physical world or critical decisions. Transparency builds trust, helps manage expectations, and supports accountability when AI causes harm.
For developers, categories encourage ethical reflection on the implications of the AI they build. Governments and regulators could use such frameworks to tailor oversight according to AI capability and impact, especially in sensitive sectors like healthcare or finance.
However, defining objective levels across diverse AI types is difficult. Rapid AI advances risk making fixed categories obsolete. Oversimplified labels might misrepresent strengths and weaknesses. Achieving global consensus is another major challenge.
Regulatory use of AI levels raises questions about control and innovation balance. While safety and ethics are critical, excessive restrictions could hamper beneficial development or widen international disparities.
As AI becomes more entwined with society, ongoing dialogue about its classification and governance will remain essential to maximizing benefits while minimizing risks.