Who Counts as a Legal Person? Embryos, AI, Corporations, and a Better Standard

Some states now say embryos count and AI doesn't, yet corporations still do, so genetics can't be the whole test. An interest-based standard keeps human accountability and denies AI personhood for now.

Published on: Nov 25, 2025

Legal Personhood of Potential People: AI and Embryos

Two Western states now say embryos are people and AI is not. Their through-line is simple: genetics equals personhood. If you're Homo sapiens, you're in. If you're code, you're out.

There's a problem: corporations have legal personhood too, and they're not human. That inconsistency signals a shaky foundation for policy, litigation, and public trust.

A cleaner approach exists: an interest-analysis framework that weighs the interests of the entity, of natural persons, and of society. Applied correctly, it explains why embryos and AI don't qualify for personhood today, while corporations do.

I. The limitation of genetics as legal personhood

A. Pro-embryo personhood laws

Post-Dobbs, states accelerated prenatal personhood in criminal and civil contexts. Idaho's code defines "unborn child" as a human organism from fertilization to live birth and treats fetal harm as homicide. Utah prohibits abortion after 18 weeks and equates an "unborn child" with a "human being" in homicide statutes, while declaring state policy that "unborn persons" have an equivalent right to life.

The explicit rationale is genetics: if an entity is biologically human, it counts as a person under law, or is treated close to it.

B. Anti-AI personhood laws

Idaho was first to bar personhood for AI (and for animals, water, and other nonhumans). Utah followed with a broad prohibition covering AI, land, weather, plants, animals, and more.

Legislators framed this as protecting human status: "a person is a person, and a tree is not." The subtext: AI may look human-like one day, but it is not Homo sapiens-so it can't be a person.

C. The inconsistency: embryos, AI, and corporate entities

If "only genetic humans are persons," how do corporations fit? Both states still recognize corporations as legal persons and carve them out as explicit exceptions in the anti-AI statutes.

That carveout concedes the core flaw: genetics alone can't explain personhood in law. You need a theory that fits embryos, AI, and corporations without ad hoc exceptions.

II. Interest analysis theory and personhood for embryos and corporate entities

A. The interest analysis theory

Under this framework, entities qualify for personhood in two ways:

  • Natural personhood: biologically human and born alive (assumed to have a stake in their own welfare).
  • Juridical personhood: nonhuman entities can qualify if (1) they have morally relevant interests (e.g., sentience, consciousness, capacity to suffer), or (2) recognizing their personhood advances the interests of natural persons and society, with rights tailored and limited to avoid infringing human rights.

B. Embryos under interest analysis

Embryos are not natural persons under this framework because they are not born and do not have a conscious stake in their own survival. They also lack morally relevant interests: no sentience, no capacity to suffer, no interaction with the world.

Societal interests don't tip the scales either. Recognition would be largely symbolic and would directly burden the rights and healthcare access of pregnant people and those seeking fertility care.

C. Corporate entities under interest analysis

Corporations are the model case for juridical personhood. Their recognition helps real people pool capital, limit liability, and transact. It also benefits society by enabling economic activity and giving injured parties a solvent defendant.

Crucially, corporate rights are tailored and limited to serve those interests, not to mirror human rights wholesale.

III. Personhood for AI under interest analysis

A. AI does not have morally relevant interests

Modern AI systems execute tasks learned from training data. They lack consciousness, self-awareness, and sentience. No morally relevant interests means no juridical personhood, at least for now.

If an artificial general intelligence were to develop subjective experience or the capacity to suffer, this framework could reassess. But that hypothetical can't justify personhood today.

B. Granting AI personhood may harm natural persons

AI can defame, infringe, and malfunction. If AI becomes a legal person, developers and deployers can try to shift liability onto the AI "entity," limiting recourse against those with actual control and deep pockets.

Proposals to fund AI with assets, bonds, or insurance sound neat, but they create undercapitalization risk and a liability shell. Piercing the corporate veil is rare. Injured parties need recourse against the companies, not a thinly financed software defendant.

C. Granting AI personhood may harm society

Some argue personhood would spur investment. Reality check: investment is already surging without it. Shielding developers from liability shifts risk onto the public and encourages premature releases.

Legislatures can address accountability directly. California now bars defendants from arguing "the AI acted autonomously" to escape liability, keeping responsibility where it belongs.

Conclusion

Genetics-based personhood produces contradictions. It excludes AI and includes embryos, yet makes an exception for corporations. That's not a principle; it's a patch.

Interest analysis offers a consistent, workable standard. Corporations qualify as juridical persons because their personhood benefits people and society. Embryos and current AI do not. And granting personhood to AI would weaken accountability and public safety.

Bottom line for legal teams: hold the line on human accountability, tailor rights to actual interests, and avoid personhood labels that enable liability arbitrage.

Practical notes for lawmakers, litigators, and in-house counsel

  • Define AI liability in statute and contracts. Bar "autonomous AI" defenses and assign responsibility to developers, deployers, and owners.
  • Resist AI personhood. If compensation pathways are unclear, fix liability rules; don't create a new rights-bearer.
  • Tailor any AI duties to specific risks (defamation, IP, safety-critical systems) and require insurance at the firm level, not the model level.
  • Use the interest analysis test for all personhood debates to avoid ad hoc carveouts that undermine credibility.
