Ohio HB 469: Declaring AI Legally Non-Sentient
Ohio's HB 469 would declare artificial intelligence legally non-sentient and bar it from rights reserved to humans. The sponsor, Rep. Thaddeus Claggett, frames the bill as a guardrail against misuse across corporate and financial systems, not a philosophy debate.
Introduced in September, the proposal is aimed at preventing liability gaps and stopping bad actors from exploiting AI to access human-only legal benefits. The focus: keep accountability clear and keep critical infrastructure human-led.
What the bill would do
- Bar any grant of legal personhood to AI systems.
- Prohibit AI "marriage" relationships that would confer spousal rights (e.g., power of attorney, financial decision-making).
- Prevent AI from owning property.
- Prohibit AI from serving on corporate boards or similar governance roles.
- Shut the door on other human-only legal benefits.
Sponsor's intent: close misuse routes and liability gaps
Claggett's concern is practical: if AI starts "replacing human things in our law," courts will face crimes and corporate disputes with no clearly liable party. He wants corporate law to remain anchored to natural persons so accountability stays traceable and enforceable.
The bill targets scenarios where a machine could be inserted to shield individuals or entities from legal responsibility. In his view, the law should affirm that rights and duties flow through humans, even when software executes tasks.
Banking and insurance: high-risk points of failure
Claggett singles out financial services and insurance as areas where AI could become deeply embedded and hard to unwind. The risk is that the line blurs between human direction, programmed behavior, and autonomous system action.
His position: don't let AI become so integral to core financial decision-making that it's effectively irremovable. If reliance grows without clear human accountability, liability and enforcement become murky.
"AI marriage" means spousal rights, not ceremonies
The bill's marriage language is about legal rights, not weddings. It targets spousal authorities such as power of attorney and financial decision-making that, if extended to AI, could create severe conflicts and enforcement problems.
Claggett acknowledges AI may perform some tasks as well as or better than people. He argues that outsourcing spousal authorities to AI would generate more legal risk than benefit.
Practical steps for corporate counsel and compliance
- Governance documents: confirm bylaws, charters, and policies require directors and officers to be natural persons.
- Vendor and model agreements: add representations that no party will assert personhood or spousal rights for AI; require human accountability and auditability.
- Decision rights: document human-in-the-loop controls, approval thresholds, and clear lines of authority for AI-assisted decisions (a minimal sketch follows this list).
- Recordkeeping: maintain logs for model prompts, outputs, overrides, and responsible approvers to support investigations and disputes.
- Critical systems: inventory AI embedded in banking, underwriting, claims, and risk functions; define exit plans and kill-switch procedures.
- Personal authority instruments: ensure POA, healthcare proxy, and fiduciary forms exclude AI as an attorney-in-fact or proxy.
- Escalation playbooks: create protocols for automated actions that exceed scope or cause harm, including legal, compliance, and technical response.
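To make the decision-rights, recordkeeping, and kill-switch items concrete, here is a minimal sketch in Python: an approval gate that refuses high-value AI-assisted actions without a named human and writes every decision to an append-only log. The function names, record fields, and the approval threshold are illustrative assumptions, not anything HB 469 or an existing framework prescribes.

```python
# Minimal sketch of a human-in-the-loop approval gate with an append-only
# audit log and a kill switch. Every name, field, and the approval
# threshold here is an illustrative assumption, not a requirement drawn
# from HB 469, UETA, or the NIST AI RMF.
import json
import time
from dataclasses import dataclass, asdict
from typing import Optional

AI_ENABLED = True            # kill switch: flip to False to force the manual workflow
APPROVAL_THRESHOLD = 50_000  # decisions above this amount need a named human approver

@dataclass
class DecisionRecord:
    """One auditable entry: the prompt, the model output, and the accountable human."""
    timestamp: float
    prompt: str
    model_output: str
    amount: float
    approver: Optional[str]   # natural person accountable for the action
    overridden: bool

def decide(prompt: str, model_output: str, amount: float,
           approver: Optional[str] = None) -> DecisionRecord:
    """Gate an AI-assisted decision and log it for later investigation."""
    if not AI_ENABLED:
        raise RuntimeError("AI path disabled; route to the manual workflow")
    # High-value decisions require a named human, keeping accountability
    # traceable to a natural person rather than to the software.
    if amount > APPROVAL_THRESHOLD and approver is None:
        raise PermissionError("human approval required above threshold")
    record = DecisionRecord(
        timestamp=time.time(),
        prompt=prompt,
        model_output=model_output,
        amount=amount,
        approver=approver,
        overridden=False,
    )
    # Append-only JSONL log of prompts, outputs, and approvers.
    with open("decision_log.jsonl", "a") as log:
        log.write(json.dumps(asdict(record)) + "\n")
    return record

# A low-value decision passes automatically; a high-value one fails fast
# unless a named person signs off.
decide("Approve claim #1234?", "approve", amount=1_200)
decide("Approve claim #5678?", "approve", amount=75_000, approver="J. Doe")
```

The design choice worth noting: the log, not the model, is the system of record, so a dispute can always be traced back to a prompt, an output, and a named approver.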
How this intersects with existing law
Many jurisdictions recognize "electronic agents" that can form contracts without human review, but that recognition does not confer personhood or human-only rights. HB 469 would reinforce that distinction in Ohio by keeping status and benefits tied to natural persons while allowing automated processes to operate under human responsibility.
For broader context, see the Uniform Law Commission's Uniform Electronic Transactions Act, which addresses electronic agents, and the NIST AI Risk Management Framework for governance practices.
What to watch in Ohio
Claggett describes a step-by-step legislative approach. Expect incremental measures that reinforce human accountability in critical systems rather than sweeping, abstract definitions.
In-house teams should monitor committee activity, map AI across high-impact workflows, and confirm that removal backstops exist where AI supports core decisions. The goal is clear lines of responsibility before reliance deepens.