Who Holds the Power in AI? William & Mary Law School Looks Ahead to Democracy's Next 250 Years

At William & Mary Law, speakers urged: let people and law, not algorithms, decide AI's role. Move fast on policies, vendor terms, audits, and election rules to curb harms.

Categorized in: AI News, Legal
Published on: Feb 04, 2026

AI, Democracy, and Practical Next Steps: Notes from William & Mary Law

On Jan. 29, William & Mary Law School convened a cross-disciplinary event on AI and the future of democracy. Backed by the Digital Democracy Lab, the Institute of Bill of Rights Law, the Election Law Society, the Data Privacy and Cybersecurity Legal Society, and the Military and Veterans Law Society, the event had a simple goal: examine what has worked for the United States and decide what should come next with AI.

Vice Provost for Research Alyson Wilson moderated a panel with Chiraag Bains, Tim Carroll, and Sunita Patel, voices spanning policy, research, and product security. The conversation stayed grounded, never losing sight of who should write the rules and how law can keep pace with technology.

Who holds the power?

Wilson set the tone: "AI systems are increasingly influencing what people see, what they buy, what they believe. And so, who truly holds power? Governments? Companies? The public? Algorithms?"

Carroll followed: "We are at this moment, right now, where nobody really understands where the power lies." For legal practitioners, that uncertainty translates to a clear mandate: clarify roles, codify accountability, and define jurisdiction before defaults set themselves.

Where should it sit?

Bains didn't hedge: "We the people should be the ones deciding and writing the rules about what impact AI has in our lives and the future of this nation." He pushed back on the fatalism that AI is beyond our control. Law is the mechanism that turns that posture into practice: rulemaking, enforcement, and review.

The policy lag is real

Carroll underscored the speed gap: new AI features can reach billions faster than policy can catch up. History can guide us, he noted, but we should also admit we're in uncharted waters and legislate with that humility.

For counsel, that means drafting adaptable standards, not brittle checklists. Think principles-based governance, risk tiers, and audit rights that survive product updates.

Immediate harms, near-term fixes

Audience questions skewed toward the downsides already here: deepfakes, manipulated media, and non-consensual content. Bains stayed pragmatic: "Regulation could functionally outlaw non-consensual intimate imagery, synthetic CSAM… I think those are solvable problems."

There's momentum to work with. See the federal push under the AI Executive Order and the risk frameworks emerging from NIST, both useful anchors for firm policies and vendor requirements.

Cross-discipline buy-in matters

The room wasn't just lawyers and technologists. One attendee from the business school noted he uses AI regularly for research and consulting. That's the point-governance only works if it reflects how people actually work.

Practical takeaways for legal teams

  • Update vendor contracts: clear AI disclosures, data-use limits, audit rights, incident notice, model change logs.
  • Publish an internal AI use policy: approved tools, ban on uploading confidential or client data, review for export controls and privacy.
  • Stand up an AI review group: legal, security, compliance, and product. Short SLAs, risk tiers, documented decisions (a minimal register sketch follows this list).
  • Track state and federal activity: deepfake labeling, election ads, non-consensual imagery, watermarking, and model accountability rules.
  • Align with NIST AI RMF: risk identification, testing, monitoring, and red-team requirements baked into procurement.
  • Prepare for litigation and discovery: data provenance, training data disputes, model bias evidence, and preservation protocols.
  • Plan for elections: guidance on synthetic media, takedown standards, and crisis comms for deceptive content incidents.
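
To make the risk-tier and documented-decision items concrete, here is a minimal Python sketch of a use-case register. Every name below (RiskTier, AIUseCase, assign_tier) and the tiering logic are hypothetical illustrations, not a standard drawn from the panel, NIST, or any statute; tune the criteria to your own policy.

```python
"""Minimal sketch of an AI use-case risk register.

Assumption: a simple three-tier model keyed on data sensitivity and
client exposure. All names here are hypothetical, for illustration only.
"""

from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    LOW = "low"        # e.g., internal drafting aids, no client data
    MEDIUM = "medium"  # e.g., client-facing output with human review
    HIGH = "high"      # e.g., confidential data or election-related content


@dataclass
class AIUseCase:
    name: str
    handles_confidential_data: bool
    output_reaches_clients: bool
    decision_log: list[str] = field(default_factory=list)


def assign_tier(use_case: AIUseCase) -> RiskTier:
    """Assign a tier and record the rationale, so the review group's
    decision is documented and survives product or staffing changes."""
    if use_case.handles_confidential_data:
        tier = RiskTier.HIGH
    elif use_case.output_reaches_clients:
        tier = RiskTier.MEDIUM
    else:
        tier = RiskTier.LOW
    use_case.decision_log.append(
        f"{use_case.name}: tiered {tier.value} "
        f"(confidential={use_case.handles_confidential_data}, "
        f"client_facing={use_case.output_reaches_clients})"
    )
    return tier
```

A register like this gives the review group a documented record to point to during audits, vendor negotiations, or discovery.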

What this means for your practice

AI governance isn't a future problem. It's an everyday legal task that touches contracts, privacy, employment, advertising, and public law. The sooner your organization sets clear rules, the fewer messy exceptions you'll need to clean up later.

If you're building out capability, a simple cadence works: inventory use cases, assign risk levels, set controls, train, and audit. Then repeat. Small loops beat annual overhauls.
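
As a sketch of how the audit step in that cadence could be tracked, the snippet below flags overdue reviews. The tier intervals and the needs_audit helper are assumptions for illustration, not guidance from the event or from NIST.

```python
"""Minimal sketch of the recurring audit check in the cadence above."""

from datetime import date, timedelta

# Hypothetical audit intervals per risk tier; set these per your policy.
AUDIT_INTERVAL = {
    "low": timedelta(days=365),
    "medium": timedelta(days=180),
    "high": timedelta(days=90),
}


def needs_audit(tier: str, last_audit: date, today: date) -> bool:
    """True when a use case's last audit is older than its tier allows."""
    return today - last_audit > AUDIT_INTERVAL[tier]


if __name__ == "__main__":
    # A high-tier tool last audited 120 days ago is overdue (prints True).
    today = date(2026, 2, 4)
    print(needs_audit("high", today - timedelta(days=120), today))
```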

Further learning

For structured, job-specific training paths that help legal teams uplevel quickly, see this curated list of AI courses by role:

AI Courses by Job - Complete AI Training

Events like this one make a simple case: law should decide where AI sits in our institutions, not the other way around. Set the guardrails now, while you still can.

