Adaptable chips could solve AI's hardware problem
Moore's Law is breaking down. The semiconductor industry can no longer reliably double chip performance every two years, and the energy demands of AI systems are surging. Aman Arora, an assistant professor of computer science and engineering at Arizona State University, is working on a different approach: hardware that reconfigures itself for the task at hand.
His tool is the field-programmable gate array, or FPGA - a chip whose internal wiring can be reprogrammed after it leaves the factory.
Why GPUs aren't enough
The chips powering AI today were built for video games. Graphics processing units, or GPUs, were designed to render virtual worlds with convincing lighting and textures. They adapted well to training AI models in data centers, where the goal is processing massive batches of data quickly.
But AI is moving beyond data centers. Real-world applications - medical diagnostics, autonomous vehicles, edge devices - need instant responses. A single request must produce an answer as fast as possible, not process hundreds of requests in parallel.
That's where GPUs start to falter. They still rely on the traditional processor rhythm: fetch an instruction, decode it, execute it. They constantly shuttle data back and forth through memory. That overhead is tolerable at data-center scale, but it wastes energy on edge devices, where latency matters more than throughput.
How FPGAs sidestep the problem
"With an FPGA, there is no instruction decode, no instruction fetch happening," Arora said. "So no overhead."
Instead of following step-by-step instructions, the chip is configured to perform a specific task directly. Its internal connections reshape themselves into a circuit built exactly for that job. Load a new configuration, and the chip becomes something different without being replaced.
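The contrast is easier to see in a toy model. The sketch below is a conceptual analogy in Python, not real hardware code: the miniature instruction set and the multiply-add task are invented for illustration. A processor-style loop pays a fetch-and-decode cost on every step, while a "configured" function computes the same result directly, the way an FPGA's fixed circuit would:

```python
# Conceptual analogy only: processor-style execution vs. a "configured" circuit.
# The instruction set and the task here are invented for illustration.

def run_program(program, registers):
    """Fetch/decode/execute loop: every step pays interpretation overhead."""
    for instruction in program:          # fetch the next instruction
        op, dst, a, b = instruction      # decode it
        if op == "mul":                  # execute it
            registers[dst] = registers[a] * registers[b]
        elif op == "add":
            registers[dst] = registers[a] + registers[b]
    return registers

# Compute r2 = r0 * r1 + r0 the "CPU way": one instruction at a time.
program = [("mul", "r2", "r0", "r1"), ("add", "r2", "r2", "r0")]
cpu_result = run_program(program, {"r0": 3, "r1": 4, "r2": 0})["r2"]

# The "FPGA way": the computation is baked into a fixed dataflow,
# so there is nothing to fetch or decode at run time.
def configured_circuit(r0, r1):
    return r0 * r1 + r0

fpga_result = configured_circuit(3, 4)

print(cpu_result, fpga_result)  # both print 15
```

Loading a new FPGA configuration is, in this analogy, swapping in a different `configured_circuit` rather than feeding new instructions through the same interpreter loop.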
Arora's lab is using this flexibility in two directions. In one, they're building AI systems that run continuously on minimal power - glucose monitoring systems and quantum computing hardware that interprets delicate signals using machine learning. In the other, they're using AI itself to help design better FPGAs by narrowing millions of possible configurations down to the most efficient ones.
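Narrowing millions of candidate designs to a handful of good ones is a design-space exploration problem. The sketch below is a heavily simplified illustration, with invented parameters and a toy cost model; a real flow would replace the hand-written formula with a learned predictor evaluated over a vastly larger space:

```python
# Simplified sketch of FPGA design-space exploration: score candidate
# architecture configurations with a cost model and keep the best one.
# The parameters and the cost model are invented for illustration.
from itertools import product

def estimated_cost(lut_size, cluster_size, channel_width):
    """Toy cost model: bigger logic blocks cost area, narrower routing
    channels cost delay. A real flow would use an ML-trained predictor."""
    area = (2 ** lut_size) * cluster_size
    delay = 1000 / channel_width + lut_size * 2
    return area * 0.01 + delay

# Enumerate a small grid of candidate configurations.
candidates = product([4, 5, 6],        # LUT input size
                     [4, 8, 10],       # logic blocks per cluster
                     [50, 100, 200])   # routing channel width

ranked = sorted(candidates, key=lambda c: estimated_cost(*c))
best = ranked[0]
print("best configuration (lut, cluster, channel):", best)
```

Even this toy version shows the shape of the problem: the search space grows multiplicatively with each architectural knob, which is why exhaustive enumeration gives way to AI-guided search at realistic scales.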
The economics shift
Companies like Microsoft already use FPGAs in production systems. The chips are standard in defense and space applications, where hardware can't be easily replaced and performance is critical.
Reconfigurable hardware changes the math of computing infrastructure. Instead of discarding chips every few years, the same hardware can be repurposed repeatedly. That reduces both energy consumption and manufacturing waste - increasingly important as AI infrastructure scales.
"Some technology companies are buying nuclear power plants to sustain the growth in AI," Arora said. "FPGAs are a much more energy-efficient alternative."
The shift reflects a broader principle: the most powerful computer may not be the one that does everything well. It may be the one that adapts to what matters most.