OpenAI taps Google AI chips in surprising shift from Nvidia and Microsoft
OpenAI now rents Google's AI chips to support ChatGPT, marking a shift away from its heavy reliance on Nvidia GPUs. The move diversifies its hardware sources and may reduce costs.

OpenAI Now Uses Google's AI Chips to Power ChatGPT and More
OpenAI has recently begun renting Google's AI chips to support ChatGPT and its other products, according to a source familiar with the matter. This marks a notable shift in OpenAI's hardware strategy, which previously relied heavily on Nvidia's GPUs.
Traditionally, OpenAI has been one of the largest buyers of Nvidia graphics processing units (GPUs). These chips are used not only to train models but also for inference computing, the stage where a trained model applies what it has learned to make predictions or decisions on new data.
A New Collaboration Between Competitors
Earlier this month, Reuters reported that OpenAI planned to add Google Cloud services to meet its growing computing demands. This move surprised many, given the competitive nature of both companies in the AI space.
For Google, this deal is part of a broader effort to make its in-house tensor processing units (TPUs) more accessible externally. Historically, these TPUs were reserved for Google's internal use. However, expanding access has attracted major clients like Apple, as well as startups such as Anthropic and Safe Superintelligence—both founded by former OpenAI leaders.
Why OpenAI Is Turning to TPUs
- OpenAI's decision to rent Google's TPUs represents the first significant use of non-Nvidia chips for its AI workloads.
- The shift also reduces OpenAI's reliance on the data centers of Microsoft, which has been its key infrastructure partner until now.
- TPUs may offer a more cost-effective alternative to Nvidia GPUs, especially for inference tasks, helping OpenAI lower operational expenses.
However, Google is reportedly not renting its most powerful TPUs to OpenAI, according to sources cited by The Information. Google remains cautious about providing its strongest hardware to a key competitor in the AI race.
What This Means for the AI Industry
This collaboration highlights Google's success in leveraging its proprietary AI hardware and cloud platform to attract significant customers. It also signals a subtle shift in AI infrastructure dynamics, with OpenAI diversifying its hardware suppliers beyond Nvidia and Microsoft.
For IT professionals and developers, this development suggests a growing variety of AI computing options in the market. As costs and performance vary between GPUs and TPUs, choosing the right hardware for AI workloads will become increasingly strategic.
Those interested in deepening their AI skills or exploring practical AI deployments might find it useful to explore specialized training. Resources like Complete AI Training's latest AI courses offer hands-on guidance on working with leading AI technologies and platforms.
OpenAI and Google have not publicly commented on the details of this arrangement. Still, the move underscores how cloud providers and AI developers are adapting to meet the demands of large-scale artificial intelligence applications.