Who Governs AI? Silicon Sovereigns and the Hollowing of Public Authority

AI is shifting public authority to corporations that set rules and moderate speech. Without real checks, a few companies end up setting the terms for democracy and markets.

Published on: Feb 19, 2026

Silicon Sovereigns: How AI Is Shifting Authority From States to Firms

AI isn't just a new technology trend. It is moving core functions of public authority into private hands - setting rules, moderating speech, allocating attention and compute, and influencing markets and elections. The risk is less a sci-fi catastrophe and more a slow erosion of the state's ability to govern in the public interest.

The gains from AI will not be shared evenly. Without counterweights, the benefits accrue to a few firms and well-connected users, while costs - from job displacement to environmental strain - are socialized. The question is no longer whether AI will be governed. It's who gets to do the governing.

The real divide isn't East-West or North-South - it's public-private

Great-power competition frames AI as an arms race. North-South debates remind us that many people still lack electricity or internet access, and worry less about misuse than about being left behind. Both matter - but the sharper split is between companies that set the rules of the digital economy and the governments trying to keep up.

Today's tech giants don't field armies or collect taxes, but their reach is comparable to historic trading empires. They write the platform rules, arbitrate disputes, and police speech at global scale. That's de facto sovereignty.

Governments are hesitating - and losing ground

States know AI touches growth, competitiveness, and national security. They also fear driving investment offshore or slowing innovation, while facing concentrated lobbying and deep voter reliance on these services. The result: fragmented, hesitant oversight.

China demonstrated that a determined state can rein in firms, though often by swapping private dominance for party control. The European Union passed the EU AI Act, but faces tough implementation and cost trade-offs. U.S. regulation remains patchy at the federal level, with attempts to preempt bolder state rules.

The leverage paradox

Users care about safety, fairness, and equity - but have little leverage. Companies have leverage - but little incentive to reduce profitable risks. Individual boycotts rarely move metrics in markets built on network effects and lock-in.

Collective action can help. Privacy movements already nudged product design and data practices. Similar pressure could push for more "responsible" or more "open" AI norms, if users, workers, researchers, and public institutions demand them together.

Transparency must include costs - not just features

Some firms - and even countries - now concede that scaling AI may derail climate targets. Disclosing the electricity and water costs of training and inference would make trade-offs visible and create competitive pressure to improve efficiency. See the IEA's analysis of data-centre energy use for context.

What governments can do now

  • Adopt a "too big to regulate" standard. If a company's size or integration prevents effective oversight, trigger structural separation, data/compute silos, or other remedies.
  • License high-risk training runs. Require reporting of compute, training data provenance, red-team plans, and safety benchmarks above defined thresholds.
  • Treat foundational AI infrastructure as critical. Apply siting approvals, resilience standards, and incident reporting to data centres, chip supply, and cloud concentration.
  • Use procurement as leverage. Mandate model cards, eval results, content policy commitments, and energy/water disclosures in public contracts; require indemnification for specified failures.
  • Mandate third-party audits. Require independent evaluations for high-risk systems, with audit trails and secure researcher access via APIs and sandboxes.
  • Establish a duty of care and recall authority. Create clear liability for foreseeable harms and empower agencies to pause or recall unsafe models or features.
  • Protect workers and organized users. Recognize collective representation for gig workers and "user unions" in platform governance; require good-faith consultation on major product changes.
  • Enforce competition rules with teeth. Target self-preferencing, exclusionary data deals, and compute-locking contracts; police interlocking directorates across AI, cloud, and chips.
  • Set environmental guardrails. Require reporting standards for energy and water per training run and per inference; align tariffs and incentives to reward efficiency and off-peak usage.
  • Safeguard elections and speech. Enforce transparency for political content, provide qualified researcher access to study platform effects, and require provenance signals for AI-generated media where feasible.

International coordination - aim for minimums that matter

Unlike nuclear risk, AI offers no single shock to force cooperation. Start with practical minimums: shared incident taxonomies, compute/run registries above thresholds, compatible audit and disclosure standards, and rapid channels for cross-border enforcement. Perfection can wait; interoperability cannot.

What to watch in the next 12-24 months

  • Concentration metrics: Share of frontier compute, top-tier model access, and exclusive data partnerships.
  • Regulatory capacity: Budgets, technical hires, and audit labs inside key agencies.
  • Procurement precedents: Which jurisdictions set the toughest contract terms - and whether vendors accept them.
  • Environmental disclosures: Movement toward standardized, verified reporting of energy and water per model and feature.
  • Case law and enforcement: Outcomes of antitrust and consumer-protection actions that set boundaries for platform behavior.

Bottom line

The danger isn't that machines will rule people. It's that a narrow set of firms - through code, compute, and contracts - will rule the conditions under which people are governed. If states and public institutions don't reassert meaningful oversight now, they may not get a second chance.

If you work in or with government and need practical support, start here: AI for Government and the AI Learning Path for Policy Makers.

