Malaysia's Digital Minister Warns Against AI Legal Personhood Without Human Accountability
Malaysia's digital minister Gobind Singh Deo said responsibility for artificial intelligence systems must stay with human actors, even as policymakers debate whether to grant AI legal personality. Speaking at a legal conference Monday, he warned that autonomous systems challenge traditional liability frameworks and complicate who bears responsibility when things go wrong.
The tension is real. As AI systems make decisions with legal consequences, courts and regulators face a question: Should the law treat sophisticated AI as something with its own legal standing, or keep liability tied to the people and companies that build and deploy it?
Gobind called for layered responsibility spread across multiple parties (developers, deployers, and users) rather than concentrated in any one of them. He also flagged the need for legal reforms to address how courts should handle disputes driven by algorithmic decisions, where causation and intent become murky.
The minister emphasized that lawyers and judges need better training to handle AI-related disputes as adoption accelerates. Few legal professionals have deep familiarity with how these systems work or fail, creating a gap between the technology and those who must interpret it in court.
Malaysia's position reflects a broader tension in AI regulation. The EU's AI Act leans toward assigning liability to those who deploy AI. Other jurisdictions are still figuring out whether and how to recognize AI as a legal entity separate from its creators. Gobind's comments suggest Malaysia will prioritize human accountability over AI personhood-at least for now.
For legal professionals, this means the rules governing AI liability are still being written. Understanding how these systems function will become increasingly necessary as courts establish precedent around responsibility and liability.