AI laws ignore environmental damage, leaving regulatory gaps
More than 200 AI laws across 100 countries focus on privacy, bias and security. Few address the environmental toll of training and running AI systems.
AI development consumes vast amounts of energy and water. Manufacturing the graphics processing units (GPUs) that power AI models requires extraction of rare earth elements, which contaminates soil and water. Training a single large language model like GPT-3 consumed an estimated 700,000 litres of freshwater, according to 2025 research.
The problem extends beyond training. Energy consumed during everyday use (generating text or images) far outweighs what's consumed during model development. As AI models grow larger and deployment spreads, overall energy use and emissions continue rising despite efficiency gains.
The EU's AI Act sets a baseline but stops short
The EU's AI Act, which took effect in August 2024, requires developers to disclose energy consumption data and develop AI systems "in a sustainable and environmentally friendly manner." But the requirement has a catch: companies only need to provide energy data when the EU's AI Office requests it.
Codes of conduct to assess and minimize environmental impact are optional, not mandatory. The Act prioritizes human-centric development over environmental safeguards.
The UK has no AI-specific environmental rules
The UK government's 2023 white paper on AI regulation explicitly excludes sustainability from its scope. The government acknowledged that addressing environmental risks was "outside of the scope" of its proposed framework, despite recognizing that AI can support climate solutions.
What transparency could achieve
Mandatory disclosure would create a foundation for real change. Developers should report energy and water consumption, carbon emissions, rare earth elements extracted, and plastic used during production.
With baseline data in place, governments could set enforceable targets for efficiency and emissions. Practical approaches already exist: training models on less carbon-intensive energy grids or in water-efficient data centres.
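The stakes of grid choice can be made concrete with simple arithmetic: a training run's emissions are roughly its energy use multiplied by the grid's carbon intensity. The sketch below illustrates this calculation; the energy figure and both intensity values are hypothetical round numbers chosen for illustration, not measured data about any real model or grid.

```python
# Illustrative sketch: how grid carbon intensity changes training emissions.
# All numeric inputs below are assumptions, not measured values.

def training_emissions_kg(energy_kwh: float, grid_g_co2_per_kwh: float) -> float:
    """Estimate CO2 emissions (kg) for a training run:
    energy used (kWh) x grid carbon intensity (g CO2 per kWh), converted to kg."""
    return energy_kwh * grid_g_co2_per_kwh / 1000  # grams -> kilograms

# Hypothetical 1 GWh training run compared across two grids
# (intensity figures are assumed for illustration):
energy_kwh = 1_000_000
coal_heavy = training_emissions_kg(energy_kwh, 700)  # assumed 700 g CO2/kWh
hydro_rich = training_emissions_kg(energy_kwh, 30)   # assumed 30 g CO2/kWh

print(f"coal-heavy grid: {coal_heavy:,.0f} kg CO2")
print(f"hydro-rich grid: {hydro_rich:,.0f} kg CO2")
```

Under these assumed numbers, the same training run emits more than 20 times as much CO2 on the coal-heavy grid, which is why siting and scheduling decisions matter even before any efficiency targets are set.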
Consumer-facing measures could include:
- Energy labels for AI systems, modeled on the EU's appliance efficiency ratings (dark green for efficient, red for inefficient)
- Carbon and water usage warnings displayed per query
- "Energy Star"-style ratings for AI products
- Social and environmental certification systems
Tax and funding incentives could encourage companies to choose sustainable approaches over cheaper alternatives.
The measurement problem
Accurate environmental accounting remains difficult. Tech companies withhold detailed information about their operations, making independent verification nearly impossible. Without transparency requirements, governments cannot set informed policy.
Integrating sustainability requirements into AI laws, through disclosure mandates, efficiency standards and consumer labeling, is essential as AI deployment accelerates. The current regulatory gap leaves environmental costs unaddressed.