Global AI Governance, Simplified: A Three-Layer Framework for Public Officials
"If the 20th century ran on oil and steel, the 21st runs on compute and the minerals that feed it." That logic now drives policy. The United States and eight partners launched Pax Silica to secure the tech supply chain. Days earlier, the Linux Foundation formed the Agentic AI Foundation (AAIF) with major AI firms to push shared tools and standards for agentic systems. Helpful moves, yet they add to a crowded policy ecosystem.
Since 2024, the Council of Europe has finalized a Convention on AI and a human rights risk assessment method. The OECD has updated its trustworthy AI principles and released guidance for AI use in government. The AI Action Summit issued a statement on inclusive and sustainable AI. The United Nations set up an Independent International Scientific Panel on AI and a Global Dialogue on AI Governance. Volume is high. Overlap is real. And it's hard for government teams to decide where to engage for impact.
The Three Layers at a Glance
This framework adapts a well-known internet governance model: three layers that keep the picture clear and operational.
- Infrastructure layer: Compute and data plumbing: semiconductors, GPUs/TPUs/NPUs, data centers, cooling, cloud platforms, and the energy and water systems that feed them (gas and nuclear power, cooling water). Think Nvidia, TSMC, hyperscalers, and critical minerals supply chains.
- Logical layer: Models, software, and orchestration: foundation models (often proprietary), open-source frameworks (PyTorch, TensorFlow), exchange formats (ONNX), and emerging interoperability protocols (e.g., the Model Context Protocol). This is where model access, safety tooling, and evaluations live.
- Social layer: Where people, agencies, and firms use AI: applications and agents across hiring, crime prevention, marketing, service delivery, and workflow automation. Examples include Gemini, ChatGPT, Veo, Napkin, Canva, Base44, and Genspark.
Why this lens helps policy work
- Scope clarity: California's Transparency in Frontier AI Act and the Hiroshima Process Code of Conduct focus mostly on the logical layer. The EU AI Act addresses high-risk uses at the social layer and adds transparency rules for foundation models at the logical layer. The Council of Europe's Convention leans into public-sector use.
- Interoperability by design: It's easier to align standards and oversight when you know which layer you're touching, and what that breaks or improves elsewhere.
- Realistic engagement: No single forum (including the UN) can cover all three layers with speed and precision. This model helps place the right issues in the right venues.
Cross-Layer Verticals You Can't Ignore
- Agents: Planning, reasoning, and tool-use (logical) paired with autonomous actions in apps and workflows (social).
- Data: Social-layer activity creates datasets that tune models in the logical layer, which then steer outcomes back at the social layer. Cloud storage sits at infrastructure, but policies ripple across all three.
- Compute policy: Regulating compute at infrastructure changes developer behavior at the logical layer and deployment choices at the social layer.
Bottom line: Policy in one layer moves the other two. Treat some topics (agents, data, compute) as verticals that cut across the stack.
Who Does What (and Where the Friction Starts)
Multiple bodies are active across layers: the OECD, Council of Europe, UN forums, and new industry consortia like AAIF. Alliances and partnerships blur the lines: Nvidia and Google on hardware, Google providing Anthropic with TPU access, and major cloud providers hosting their own models while offering applications. Large firms now pursue the full stack (compute, models, and apps) while locking in long-term energy deals (Microsoft in 2024 for nuclear power; Meta in early 2026). Coordination is necessary; overreach is risky.
Two helpful anchors for officials: OECD AI Principles and the Council of Europe Convention on AI.
Practical Steps for Government Teams
Policy levers by layer
- Infrastructure
- Track and secure critical minerals and equipment (align with Pax Silica goals). Use trade, export controls, and incentives to reduce single-point dependencies.
- Monitor compute capacity and energy footprints. Tie public incentives to safety, security, and environmental performance (cooling, water, emissions).
- Use procurement to require uptime, security, incident reporting, and continuity plans for data centers and cloud services.
- Logical
- Set disclosure and testing expectations for model providers (capabilities, evaluations, known limitations, red-team results).
- Back open standards and safe interoperability protocols. Encourage secure model APIs and model cards as baselines.
- Support open-source components with secure development requirements and maintained reference implementations.
- Social
- Apply risk-based rules for actual use: audits, human oversight, contestability, and clear logs for high-impact public decisions.
- Issue sector guidance (hiring, policing, health, education) that ties outcomes back to data quality and model constraints.
- Use public procurement to demand compliance by default: evaluation, accessibility, privacy, and equity checks baked into contracts.
Guardrails for Multistakeholder Governance
- Keep mandates tight. Avoid mission creep and overlapping remits that slow delivery.
- Prefer agile, data-driven instruments over sprawling frameworks. Pilot, measure, iterate.
- Aim for interoperability across jurisdictions. Let coalitions of like-minded countries move first, then share what works.
- Use the UN selectively for convening and consensus notes, not for detailed technical rulemaking.
Policy Questions to Prioritize
- Which layer are we regulating, and what second-order effects should we expect in the other layers?
- How do we align compute oversight with privacy, competition, and trade commitments?
- What evaluation standards travel well across borders for both proprietary and open-weight models?
- How do we govern agents that can take autonomous actions across multiple systems?
- Where should public funding go: energy efficiency, safety tooling, open standards, or targeted R&D?
What the Next Layer Might Add
- Below infrastructure: Materials and minerals (silicon, gallium compounds, rare earth elements like dysprosium). Sourcing raises geopolitical questions.
- Off-planet infrastructure: Space-based data centers are on the table (e.g., Nvidia's Starcloud concept), which pulls space law into AI policy.
- Physical AI systems: Autonomous vehicles, drones, robots, wearables, and smart-city deployments may merit a parallel layer to cover embodied risk.
- Foundation models: Some will treat them as a distinct layer, given their outsized influence.
- Beyond AI: The same three layers can frame quantum and neurotech, with the human body possibly viewed as a foundational layer for brain-machine interfaces.
90-Day Starter Plan for Public Agencies
- Map your portfolio to the three layers. Note overlap with other ministries and where coordination is missing.
- Stand up a minimal evaluation protocol: model cards required for vendors; basic red-teaming for high-impact use cases.
- Draft procurement clauses for AI systems: logging, human oversight, data retention, and incident reporting.
- Pick one cross-layer vertical (agents or data) and run a focused policy sprint with industry and civil society.
- Join or observe two standards efforts relevant to your remit (interoperability, safety evaluations, or identity/traceability).
Upskilling your team
If your department needs focused training on AI capabilities, risks, and procurement, explore role-based options such as AI courses organized by job function.
Closing thought
The three-layer frame won't solve every policy problem, but it reduces noise. Use it to place issues, pick levers, and sequence action. As the ecosystem changes, adjust the model, then keep going.