Landmark development or blunder for Andhra? The reality of AI data centres
Andhra Pradesh is set to host a massive AI data centre complex led by Google in Visakhapatnam and Anakapalli. The pitch: "landmark development," investment, and jobs. The reality: AI data centres are power-hungry, water-intensive, and highly automated. That means significant grid and water stress, limited long-term local jobs, and tricky claims about clean energy.
What's actually being built
AI data centres are not typical server farms. They require high-density compute, specialised networking (think InfiniBand and ultra-low-latency fabrics), custom power distribution, and heavy-duty cooling. The heat is extreme, the energy draw is massive, and cooling can consume large volumes of water in hot climates.
Critics note that recent corporate sustainability reports have walked back earlier blanket claims of carbon "neutrality", and even Big Tech now concedes that hitting 2030 climate targets will be hard. That matters if the project's pitch leans on broad "100% renewable" claims without hourly matching or grid reality checks.
The activism wave has arrived
Data centre resistance is now common in the US and Europe. Communities push back on water, electricity, land use, tax breaks, and low job density. Chile saw a major project paused over water concerns, and in Portugal a corruption probe tied to the Sines data-centre project helped bring down the government. Andhra just got its first taste of similar activism, led by the Human Rights Forum (HRF).
The energy and water calculus
- Energy: A 1 GW campus draws city-scale power. Even with renewables, data centres often fall back on fossil generation during peaks or lulls. Annual "100% renewable" claims usually mean offsets or annual matching, not 24/7 hourly clean supply.
- Cooling: In hot, humid climates, water usage can spike with evaporative systems. Switching to closed-loop or seawater cooling reduces freshwater use but raises cost and complexity. Without careful design, groundwater tables drop and discharge risks rise.
- Grid impact: Large, spiky loads stress transmission, raise balancing costs, and can crowd out other consumers unless demand response and on-site storage are baked in.
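To make the scale concrete, here is a back-of-envelope sketch of annual energy and water demand for a hypothetical 1 GW campus. Every input (load, utilisation, PUE, WUE) is an assumed illustrative value, not a figure from this project:

```python
# Back-of-envelope sizing for a hypothetical 1 GW campus.
# All inputs are illustrative assumptions, not project figures.

IT_LOAD_MW = 1000          # assumed peak IT load
UTILISATION = 0.8          # assumed average utilisation
PUE = 1.3                  # assumed power usage effectiveness (total / IT energy)
WUE = 1.8                  # assumed litres of water per kWh of IT energy
HOURS_PER_YEAR = 8760

it_energy_mwh = IT_LOAD_MW * UTILISATION * HOURS_PER_YEAR
total_energy_mwh = it_energy_mwh * PUE
water_litres = it_energy_mwh * 1000 * WUE  # MWh -> kWh, then L/kWh

print(f"IT energy:     {it_energy_mwh / 1e6:.1f} TWh/yr")      # 7.0 TWh/yr
print(f"Grid draw:     {total_energy_mwh / 1e6:.1f} TWh/yr")   # 9.1 TWh/yr
print(f"Cooling water: {water_litres / 1e9:.1f} billion L/yr") # 12.6 billion L/yr
```

Even under these moderate assumptions, the campus draws roughly 9 TWh a year and, with evaporative cooling, over ten billion litres of water. That is why the siting and cooling-technology choices dominate the local impact.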
The job story: construction surge, then a cliff
During build-out, thousands of workers will be needed for civil works, power, networking, and mechanical systems. Once live, headcount collapses. Hyperscale facilities run with small, specialised teams: facilities engineers, network engineers, SRE/automation, security, and vendor field techs. Most roles are highly skilled and often filled by external hires.
The "massive jobs" narrative usually refers to the construction phase and upstream supply chains, not the data centre's steady-state operations.
Risk profile for Andhra Pradesh
- Water stress: Visakhapatnam already faces erratic rainfall and groundwater depletion. Large evaporative cooling loads intensify local scarcity unless the design prioritises non-potable sources and high-efficiency, low-water systems.
- Emissions: Without 24/7 carbon-free energy and firm clean supply, fossil peakers fill the gap. That undermines "green" claims and raises Scope 2 emissions.
- Waste and discharge: Chemicals from water treatment and equipment maintenance require strict monitoring and transparent reporting.
- Public value vs incentives: Reports of significant fiscal incentives (land, tax breaks, discounts, reimbursements) demand a transparent cost-benefit analysis: long-term jobs, local linkages, grid upgrades, and environmental safeguards.
For IT and development teams: what this means for your work
- Architect for efficiency: Model distillation, quantisation (INT8/FP8), sparsity, LoRA, efficient attention, and caching reduce training and inference load, and therefore energy and cooling needs.
- Smarter scheduling: Carbon-aware and water-aware schedulers (hourly clean energy matching, workload shifting, preemption, demand response) can materially cut emissions and grid strain.
- Infra choices matter: Right-size GPUs/accelerators, embrace mixed precision, use compiled runtimes (TensorRT, XLA), and pipeline parallelism. Avoid overprovisioning clusters "just in case."
- Observability for sustainability: Track PUE, WUE, carbon intensity per workload, and cooling setpoints. Expose metrics to product teams so efficiency affects roadmaps, not just facilities.
- Edge and hybrid patterns: Push inference to edge where latency allows; use regional clusters with cleaner grids; batch non-urgent jobs to align with cleaner hours.
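As one concrete instance of the efficiency levers above, here is a minimal symmetric per-tensor INT8 quantisation sketch in NumPy. It is a simplification of what production runtimes such as TensorRT do (real systems use per-channel scales and calibration); the shapes and tolerance are illustrative:

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor INT8 quantisation: map weights into [-127, 127]."""
    scale = float(np.max(np.abs(w))) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the INT8 codes."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# 4x smaller than float32, with bounded reconstruction error.
print(q.nbytes, w.nbytes)  # 65536 262144
print(float(np.abs(w - w_hat).max()) <= scale / 2 + 1e-5)  # True
```

A 4x memory cut translates directly into fewer accelerators per model served, which is the point of the bullet above: efficiency work in the stack shows up as megawatts not drawn at the facility.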
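The carbon-aware scheduling idea can be sketched as a toy greedy assignment: given an hourly carbon-intensity forecast, run deferrable batch jobs in the cleanest hours. The forecast numbers below are invented for illustration; a real system would pull them from a grid-intensity API:

```python
# Toy carbon-aware scheduler: place deferrable batch jobs in the
# forecast hours with the lowest grid carbon intensity (gCO2/kWh).

def schedule_jobs(forecast: list, n_jobs: int) -> list:
    """Return the indices of the n_jobs cleanest hours, earliest first."""
    ranked = sorted(range(len(forecast)), key=lambda h: forecast[h])
    return sorted(ranked[:n_jobs])

# Hypothetical hourly forecast for one day slice, gCO2/kWh:
forecast = [620, 580, 300, 210, 190, 240, 480, 650]
hours = schedule_jobs(forecast, n_jobs=3)
print(hours)  # [3, 4, 5]
```

In this toy case the three jobs land in hours averaging ~213 gCO2/kWh instead of the day's ~409 average: the same compute, roughly half the emissions, purely from when it runs.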
What good looks like for hyperscale builds
- Water: Prioritise non-potable sources, wastewater reuse, closed-loop or liquid cooling, and strict discharge controls. Publicly report WUE monthly.
- Energy: 24/7 carbon-free procurement (hourly), grid-interactive operations, on-site storage, and firm clean PPAs. Publish location-based emissions, not just market-based claims.
- Grid upgrades: Co-fund transmission and distribution improvements and share benefits with local consumers.
- Heat reuse: District heating or industrial reuse where viable.
- Transparency: Open data on water use, energy, emissions, waste, and incident reports. Independent audits. Community benefit agreements.
- Jobs and skills: Fund local training pipelines for facilities, electrical, network, and SRE roles. Tie incentives to local hiring targets.
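The reporting metrics in the list above reduce to simple ratios over metered energy and water. A minimal sketch, using hypothetical one-month meter readings (all figures invented for illustration):

```python
def pue(total_facility_kwh: float, it_kwh: float) -> float:
    """PUE = total facility energy / IT equipment energy (ideal = 1.0)."""
    return total_facility_kwh / it_kwh

def wue(site_water_litres: float, it_kwh: float) -> float:
    """WUE = site water use (litres) / IT equipment energy (kWh)."""
    return site_water_litres / it_kwh

def location_based_co2_kg(total_facility_kwh: float, grid_g_per_kwh: float) -> float:
    """Location-based Scope 2: facility energy x local grid intensity."""
    return total_facility_kwh * grid_g_per_kwh / 1000.0

# Hypothetical monthly meter readings:
print(round(pue(13_000_000, 10_000_000), 2))        # 1.3
print(round(wue(18_000_000, 10_000_000), 2))        # 1.8
print(location_based_co2_kg(13_000_000, 700))       # 9100000.0 kg
```

Note the asymmetry the list insists on: the location-based number uses the actual local grid's intensity, so it cannot be reduced by buying certificates elsewhere, which is why publishing it alongside market-based claims matters.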
Questions policy teams and buyers should ask before cheering a deal
- Is "100% renewable" hourly-matched and local, or annual offsets on a distant grid?
- What is the projected WUE and source of cooling water across seasons and heatwaves?
- How many permanent roles, with what skill profiles, and what share will be local?
- Who pays for grid upgrades and who benefits from them?
- What are the binding caps and reporting on water draw, discharge quality, and emissions?
- What happens to incentives if targets are missed?
Bottom line
AI data centres can enable growth, but they are infrastructure plays with heavy externalities. Without hard guarantees on water, energy, transparency, and real local benefits, the "landmark" label is marketing. The tech is impressive; the public value depends on rigorous design, honest accounting, and enforceable commitments.
Next steps for practitioners
- Push for efficiency by default in your AI stack: smaller models, better compilers, and carbon-aware scheduling.
- Ask providers for hourly carbon and water disclosures tied to your workloads. If they can't provide them, escalate.
- If you're building or operating these systems, upskill in AI automation to cut waste and cost. Practical AI automation certification can accelerate that shift.