Thirty Years of Original Sin in Digital and AI Governance
Two moves on 8 February 1996 set a precedent we're still living with. One declared the internet beyond the reach of states. The other excused platforms from basic legal responsibility for content. Together, they taught an industry to outrun the law, and to expect an exception.
The "independence" that never existed
The claim was bold: leave cyberspace alone; governments have no sovereignty here. It sounded liberating. It was a mirage.
There is no separate space. Every request, post, or model output is a physical event crossing cables, routers, and servers under the jurisdiction of actual countries. Pretending otherwise was a call to opt out of legal and ethical traditions that exist to manage human risks.
The shield that rewrote incentives
On the same day, the US Communications Decency Act created Section 230, a liability shield for platforms that host user content. As a way to incubate a fragile industry, it made sense. It also normalized the idea that a company could profit from activity on its service without bearing responsibility for foreseeable harms.
Decades later, these companies are among the largest in history, yet the shield largely stands. We wouldn't let a newspaper print whatever it likes and then deny being the publisher. We shouldn't leave AI platforms in a similar gap.
47 U.S.C. § 230 (Cornell LII)
Two ideas that fed each other
The myth of a borderless internet gave cover to exceptional legal treatment. If this is a "new world," why use "old" rules? That framing stuck.
It was challenged from day one. Judge Frank Easterbrook argued that we need "internet law" no more than we needed a "law of the horse"; apply existing principles to new facts. Thirty years on, he looks prescient: the issues are tort, contract, product liability, competition, and constitutional rights, not magic.
Easterbrook, Cyberspace and the Law of the Horse (U. Chicago)
The AI twist: immunity without responsibility
AI companies now rely on the same logic: we build the system; others produce the content. That stance turns a structural advantage into a societal cost. A model can amplify hatred, turbocharge misinformation, or worsen self-harm risk, yet the legal exposure often sits with users, victims, or small intermediaries.
Other industries don't get that grace. Cars have recalls. Drugs have warnings and liability. Software that increasingly mediates speech, work, health, and safety should not be an exception.
Back to first principles: accountability for foreseeable harms
The fix is simple to state, though it takes care to implement: if you create, operate, and profit from a technology, you are accountable for its foreseeable impacts. That doesn't freeze innovation. It aligns incentives so that speed doesn't outrun duty of care.
What legal teams can do now
- Define roles and duties: distinguish "developer," "integrator," and "operator," and attach obligations to each stage of the supply chain.
- Adopt a foreseeability standard: negligent design, failure to warn, and failure to mitigate known misuse patterns should trigger liability.
- Use product liability analogies: strict liability for high-risk uses (e.g., medical, critical infrastructure, elections) and negligence for general use.
- Condition safe harbors on safeguards: documented risk assessments, incident reporting, audit logs, evaluations, and effective abuse response.
- Mandate recall/patch processes for models: versioning, rollback plans, kill switches for high-risk deployments, and user notification duties.
- Require insurance or bonding for high-scale systems to price risk and fund remediation.
- Enforce traceability: dataset and model provenance; content provenance (e.g., watermarking or cryptographic claims); clear version IDs (a minimal manifest sketch follows this list).
- Clarify choice of law and venue: bind high-scale services to jurisdictions where they operate; prohibit liability arbitrage via fine print.
- Enable independent testing: lawful access for accredited red-teamers and researchers under responsible disclosure.
- Tie compliance to standards: align with recognized risk management practices; make conformity evidence admissible but not dispositive.
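To make the traceability, recall, and safe-harbor items concrete, here is a minimal sketch in Python of what an operator-side release manifest and deployment gate could look like. Everything in it is illustrative: the names (ReleaseManifest, risk_assessment_ref, deployment_allowed, and so on) are assumptions for this example, not a statutory schema or any vendor's API.

```python
# Illustrative sketch only: a minimal release manifest an operator could keep
# for each model version, supporting traceability, rollback, and audit duties.
# All class and field names are hypothetical, not any standard or vendor schema.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import hashlib
import json


@dataclass
class ReleaseManifest:
    model_name: str
    version: str                       # clear version ID for recall/rollback
    training_data_refs: list[str]      # dataset provenance pointers
    risk_assessment_ref: str           # documented risk assessment (safe-harbor condition)
    known_misuse_patterns: list[str]   # feeds failure-to-mitigate analysis
    high_risk_uses_blocked: list[str]  # e.g. medical triage, election targeting
    rollback_to: str | None = None     # previous safe version, if any
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def fingerprint(self) -> str:
        """Stable hash of the manifest, suitable for audit logs and discovery."""
        canonical = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(canonical).hexdigest()


def deployment_allowed(manifest: ReleaseManifest, kill_switch_engaged: bool) -> bool:
    """An engaged kill switch or a missing risk assessment blocks deployment."""
    if kill_switch_engaged:
        return False
    return bool(manifest.risk_assessment_ref)


if __name__ == "__main__":
    manifest = ReleaseManifest(
        model_name="example-model",
        version="2.3.1",
        training_data_refs=["dataset://corpus-2025-q4"],
        risk_assessment_ref="ra/2025-11-example.pdf",
        known_misuse_patterns=["automated harassment", "phishing drafts"],
        high_risk_uses_blocked=["medical diagnosis", "election microtargeting"],
        rollback_to="2.3.0",
    )
    print(manifest.fingerprint())
    print(deployment_allowed(manifest, kill_switch_engaged=False))
```

The design choice worth noting is the stable fingerprint: hashing a canonical serialization of the manifest gives auditors, insurers, and litigants a version identifier that cannot quietly drift between what was assessed and what was shipped.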
Regulatory levers worth using
- Duty to warn and update: require prominent, context-specific risk disclosures and timely mitigation when new hazards emerge.
- Reasonable monitoring for known harms: no blanket general monitoring, but clear duties once specific risks are identified.
- Market transparency: publish model cards, safety reports, and significant change logs; keep records for audit and discovery (see the sketch after this list).
- Scale-based obligations: higher user reach or capability means higher duty of care and stiffer penalties.
- Remedies beyond fines: injunctive controls, deployment pauses, third-party audits, and binding corrective action plans.
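As a companion to the transparency and scale-based items above, here is a minimal sketch of a machine-readable transparency record that pairs a model card summary with a reach-based duty tier. The thresholds, field names, and tier labels are illustrative assumptions only; no statute or standard prescribes them.

```python
# Illustrative sketch only: a transparency record combining a model card summary
# with a scale-based duty tier. Thresholds and field names are assumptions.
import json

# Assumed reach thresholds, for illustration only (monthly active users, tier).
DUTY_TIERS = [
    (50_000_000, "systemic"),   # highest duty of care, stiffest penalties
    (1_000_000, "elevated"),
    (0, "baseline"),
]


def duty_tier(monthly_active_users: int) -> str:
    """Map reach to an obligation tier; larger reach means a higher duty of care."""
    for threshold, tier in DUTY_TIERS:
        if monthly_active_users >= threshold:
            return tier
    return "baseline"


def transparency_record(model_card: dict, monthly_active_users: int) -> str:
    """Bundle the public model card with the applicable tier, ready to publish
    and to retain for audit and discovery."""
    record = {
        "model_card": model_card,
        "monthly_active_users": monthly_active_users,
        "duty_tier": duty_tier(monthly_active_users),
    }
    return json.dumps(record, indent=2)


if __name__ == "__main__":
    card = {
        "model": "example-model 2.3.1",
        "intended_uses": ["drafting", "summarization"],
        "known_limitations": ["hallucinated citations"],
        "safety_evaluations": ["red-team report 2025-Q4"],
        "significant_changes": ["tightened self-harm refusals"],
    }
    print(transparency_record(card, monthly_active_users=12_000_000))
```

Publishing a record like this for every significant version, and retaining each one, makes the "keep records for audit and discovery" duty mechanical rather than aspirational.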
Litigation playbook, in brief
- Plead negligent design, failure to warn, deceptive practices, and product liability where the model or service functions as a product.
- Target enterprise defendants with control over model behavior, safety settings, distribution, and monetization.
- Seek discovery on risk assessments, abuse reports, tuning data, gating controls, and executive knowledge of foreseeable harms.
- Request injunctive relief tied to specific controls: rate limits, feature gating, safety tuning, or withdrawal of hazardous versions.
The bottom line
Cyberspace was never outside the law. Section 230 was never meant to be a permanent escape hatch for trillion-dollar businesses. AI raises the stakes, so the exception must end.
Bring tech back under the same principle that governs every other tool: profit comes with duty. Accountability for foreseeable harms is the cleanest path to align innovation with society's interests, and to give courts, counsel, and companies a rule they can actually use.