Should AI Have Legal Rights? The Law Commission Weighs In on the Future of Artificial Intelligence and Liability

Should AI be granted legal personality to clarify liability for its decisions? The Law Commission says AI isn’t yet advanced enough, but this debate is likely to resurface soon.

Categorized in: AI News, Legal
Published on: Aug 01, 2025

Should AI Be Granted ‘Legal Personality’?

A recent report from the Law Commission has raised an intriguing question: should artificial intelligence be granted a separate legal personality? While this might sound like science fiction, the concept itself isn’t new. Legal personality already applies to various entities, and who qualifies has shifted over time.

Legal personality confers a bundle of rights and obligations, such as owning property, entering into contracts, and suing or being sued in one’s own name. Not all legal persons hold the same rights, however: corporations, for example, have different rights and responsibilities from natural persons.

When AI Takes on Agentic Roles

As AI systems become more capable of autonomous reasoning and decision-making, questions arise about whether they fit into this framework. Consider corporations using AI to make crucial decisions or interpret contracts and regulations. If AI actively influences legal decisions, should it be formally recognized as a legal person?

This recognition could reshape accountability. For instance, law firms currently bear full responsibility for their actions, even when using AI tools. But if an AI system had legal personality, could firms shift liability onto the AI? And what about the vendors behind these AI systems — would they accept this new level of risk?

Arguments For and Against Granting Legal Personality to AI

The Law Commission identifies liability as a central argument in favour of AI legal personality: granting it could clarify who is responsible when AI-driven decisions cause harm or give rise to disputes. On the other hand, the existing legal framework holds human actors accountable, which preserves clear lines of responsibility.

At present, the Commission finds that AI systems have not reached a level of sophistication that justifies granting them legal personality. But with AI technology advancing steadily, this question is likely to resurface in the near future.

Looking Ahead: Legal Evolution and AI

Sir Peter Fraser, Chair of the Law Commission, notes that AI is expanding rapidly into diverse applications, from automated driving to health diagnostics. While the benefits are considerable, so too are the potential risks, and the law must adapt to manage these emerging challenges.

The Commission’s report aims to raise awareness and spark discussion on how AI might impact the legal landscape, paving the way for future reforms if necessary.

What This Means for Legal Professionals

  • Stay informed about AI developments and legal debates surrounding its role.
  • Consider implications for liability and accountability when using AI tools in practice.
  • Engage in discussions on law reform related to AI to help shape practical and fair outcomes.

For legal professionals interested in expanding their understanding of AI’s impact on law and practice, resources like Complete AI Training's latest courses offer practical guidance on AI applications in legal contexts.
