CMU Researchers Cut Energy Demands From AI Data Centers
Researchers at Carnegie Mellon University are developing hardware and software designs that reduce the energy consumption of data centers powering artificial intelligence systems. Three separate projects tackle the problem from different angles: more efficient server designs, a new processor chip architecture, and scheduling data center work during off-peak hours.
The work addresses a concrete problem. AI data centers are straining the U.S. energy grid, with electricity demands projected to double or triple within the next few years. The U.S. Department of Energy estimates that data center energy use could represent as much as 12% of total U.S. electricity consumption by 2028. Rising energy costs are already driving up utility bills across the country.
Reusing and Redesigning Servers
Akshitha Sriraman, an assistant professor of electrical and computer engineering at CMU, leads a team designing servers that consume less energy over their full lifecycle. The approach combines older and newer technology by reusing existing server components while incorporating more efficient new parts.
"We call them 'carbon-efficient servers,'" Sriraman said. "Our new hardware design is able to run things more efficiently and it's also more sustainable."
The strategy involves a tradeoff. Reusing older components may be less energy efficient per operation, but keeping them in use longer avoids the energy cost of manufacturing replacements.
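The tradeoff can be illustrated with a back-of-the-envelope lifecycle calculation. All figures below are hypothetical for illustration, not from the CMU study: an older server drawing more power can still come out ahead once the embodied carbon of manufacturing a replacement is counted.

```python
def lifecycle_carbon(embodied_kg, power_watts, hours, grid_kg_per_kwh=0.4):
    """Total carbon over a service period: manufacturing (embodied)
    emissions plus operational emissions from electricity use."""
    operational_kg = power_watts / 1000 * hours * grid_kg_per_kwh
    return embodied_kg + operational_kg

HOURS = 4 * 365 * 24  # four more years of continuous service

# Reused server: no new manufacturing, but draws more power per unit of work.
reuse = lifecycle_carbon(embodied_kg=0, power_watts=450, hours=HOURS)

# Replacement server: more efficient in operation, but building it
# emits a large amount of carbon up front.
replace = lifecycle_carbon(embodied_kg=1500, power_watts=350, hours=HOURS)

print(f"reuse: {reuse:.0f} kg CO2e, replace: {replace:.0f} kg CO2e")
```

With these made-up numbers, reuse narrowly wins; the balance shifts with the grid's carbon intensity and how long the hardware stays in service, which is why a full-lifecycle view matters.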
Microsoft is already exploring adoption of Sriraman's designs for both internal systems and public cloud customers. The company is considering the technology as part of its strategy to meet its 2030 decarbonization targets.
Widespread adoption by major cloud companies could eliminate roughly 100 million of the estimated 2.5 billion metric tons of carbon the cloud sector is projected to emit by 2030, equivalent to the annual emissions of a country like Qatar or Venezuela.
A New Processor Architecture
Brandon Lucia and Nathan Beckmann, both CMU professors, created Efficient Computer, a company developing a fundamentally different type of processor chip. Traditional processors repeatedly fetch instructions from memory and move data around the chip, consuming significant energy during billions of operations per second.
Efficient Computer's new architecture eliminates the need to fetch instructions cycle by cycle and improves data flow within the chip. "We eliminate a huge energy sink," Lucia said.
The company claims its processor is 10 times more energy efficient than the best low-power general-purpose processors currently available. A battery-powered device could run for years instead of weeks on a single charge.
Efficient Computer recently announced $60 million in new funding to expand the work.
Running Data Centers Overnight
Peter Zhang, an assistant professor of operations research at CMU's Heinz College, is exploring whether data centers could shift their workloads to overnight hours when electricity demand is lower. The approach could reduce strain on the aging U.S. energy grid.
Zhang's research examines whether dynamic workload adjustments would stabilize electricity demand and what incentives would encourage data centers to operate on a nocturnal schedule. His proposal won the inaugural AI & Energy seed grant from CMU's Scott Institute for Energy Innovation and Block Center for Technology and Society.
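The idea of shifting deferrable work into low-demand hours can be sketched as a simple greedy scheduler. This is an illustrative toy, not Zhang's model; the demand curve, job names, and per-hour capacity are all invented:

```python
# Relative grid demand per hour: high during the day (8:00-22:00), low overnight.
HOURLY_DEMAND = [0.9 if 8 <= h < 22 else 0.4 for h in range(24)]

def schedule(jobs, hourly_demand, capacity=2):
    """Greedily assign each deferrable job to the lowest-demand hour
    that still has capacity, filling overnight slots first."""
    load = [0] * 24       # jobs already placed in each hour
    plan = {}
    for job in jobs:
        candidates = [h for h in range(24) if load[h] < capacity]
        hour = min(candidates, key=lambda h: hourly_demand[h])
        plan[job] = hour
        load[hour] += 1
    return plan

plan = schedule(["train_a", "train_b", "batch_etl"], HOURLY_DEMAND)
print(plan)  # all three jobs land in overnight hours
```

Even this crude policy keeps all deferrable work out of peak hours; the open research questions are how far real workloads can be deferred and what pricing incentives would make operators do it.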