The $900 Billion Google Business NVIDIA Fears
Google is stepping up its push into the AI hardware business, taking aim at NVIDIA with its proprietary Tensor Processing Units (TPUs). The company has begun courting smaller cloud providers such as Fluidstack, Crusoe and CoreWeave to host TPUs, offering an alternative to NVIDIA’s dominant chips.
According to D.A. Davidson analysts, cited by MarketWatch, combining Google’s TPU business with its DeepMind AI research unit could be valued at around $900 billion.
The sixth-generation Trillium TPUs are already in demand, and the upcoming seventh-gen Ironwood TPUs, built for large-scale inference, are expected to drive further interest.
Google’s TPU Business Could Rival Search in Value
Dylan Patel, chief analyst at SemiAnalysis, an independent research and analysis firm, believes Google could unlock massive value if it decided to sell its TPUs beyond its own cloud.
In a recent YouTube podcast, Patel pointed out that demand for custom silicon from companies like Amazon, Google and Meta has surged. While Amazon is still figuring out how to fully scale its Trainium chips, Patel said, “I totally think Google should sell TPUs externally, not just renting, but physically.”
According to him, such a move could even rival the market value of Google’s search business, given the growing adoption of open-source models and the falling cost of AI deployment.
“It’s kind of funny if a side hobby, in theory, has a higher company value potential than your entire business,” he said, adding that internally, Google has discussed the idea, but it would require “a big reorg of culture” across its TPU and software teams.
In an exclusive interview with AIM, Sunil Gupta, CEO of cloud services company Yotta, said he is excited about the prospect of Google selling TPUs and would be open to hosting them in Yotta’s data centres if the opportunity arises.
Meanwhile, as part of the third tender under the IndiaAI Mission’s compute infrastructure expansion, 1,050 Google Trillium TPUs were added to the national cluster, marking their first inclusion alongside thousands of GPUs.
Reliance recently unveiled its new venture, Reliance Intelligence, which will use Google Cloud infrastructure running on TPUs. “Google and Reliance are partnering to transform all of Reliance’s businesses with AI—from energy and retail to telecom and financial services. To enable this adoption, we are creating a dedicated cloud region for Reliance, bringing Google Cloud’s AI and compute, powered by Reliance’s clean energy and connected by Jio’s advanced network,” said Sundar Pichai, CEO of Google.
Developer activity around TPUs on Google Cloud grew by 96% in just six months, indicating the growing momentum among engineers and researchers outside Google.
Stiff Competition
Ironwood, Google’s seventh-generation TPU, is the first designed specifically for large-scale inference, the stage where trained AI models are deployed to serve predictions, and it is expected to attract even more interest than the already in-demand Trillium.
“Ironwood is comparable to Blackwell GPUs from NVIDIA. The Edge TPUs that Google is selling will face stiff competition from newer generation CPUs, and for inferencing, they will be up against players like SambaNova and Cerebras,” A S Rajgopal, MD and CEO of NxtGen Cloud Technologies, a specialised cloud platform for financial services, healthcare and government, told AIM.
He added that at the edge, Google TPUs will compete with regular CPUs from AMD and Intel. “We are running the OSS (20B) model on a 128-core AMD CPU with strong results. Today, newer CPUs can already handle models of up to 10B parameters, and they are progressively integrating more AI capabilities that will compete with Google’s Edge TPUs,” he said.
Globally, OpenAI has begun using Google’s TPUs for AI inference to reduce costs compared with NVIDIA GPUs. The company, which had primarily rented NVIDIA hardware from Microsoft and Oracle datacentres, has now added Google’s Tensor Processing Units to its infrastructure.
Last year, Apple also revealed that it employed Google’s cloud-based TPU clusters, specifically the v4 and v5p versions, to train its Apple Foundation Model (AFM).
Some in the developer community believe Google should go all in on TPUs rather than continuing to support NVIDIA hardware. As one developer put it on X, “Google should just go all the way and stop supporting NVIDIA with JAX—just start selling TPUs to people. Full TPU, full JIT heaven.”
Sasank Chilamkurthy, the founder of Qure AI and Von Neumann AI, however, told AIM that TPUs remain a “niche but the best alternative to NVIDIA” at scale, even though their adoption outside Google’s ecosystem faces challenges.
He noted that TPUs are tightly integrated with Google’s own software stack—particularly JAX, which has largely replaced TensorFlow—but support for PyTorch, the industry’s favourite, remains weak. “PyTorch doesn’t really work well with TPUs. JAX is the way to go if you use TPUs, but it is still very Google-centric,” he explained.
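The stack difference Chilamkurthy describes is visible in a few lines of code. Below is a minimal, illustrative sketch (assuming the open-source `jax` library is installed): the same JIT-compiled function runs unchanged on whichever backend JAX discovers at start-up, TPU, GPU or CPU, which is why JAX-first teams find TPUs nearly frictionless while PyTorch-centric teams do not.

```python
import jax
import jax.numpy as jnp

# JAX discovers the available accelerator backend at start-up:
# on a Cloud TPU VM this lists TpuDevice entries; elsewhere, CPU or GPU.
print(jax.devices())

@jax.jit  # XLA-compiles the function for whichever backend was found
def predict(w, x):
    # Toy inference step: one dense layer followed by a ReLU.
    return jnp.maximum(x @ w, 0.0)

w = jnp.ones((4, 2))
x = jnp.ones((3, 4))
out = predict(w, x)
print(out.shape)  # (3, 2), computed on TPU if present, CPU otherwise
```

The point of the sketch is that no TPU-specific code appears anywhere: the hardware targeting happens inside XLA, not in user code.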
While he sees TPUs as competitive on performance, Chilamkurthy believes the bigger challenge lies in commercialisation. “For Google, selling TPUs externally is a sales problem—they would need to build customer support teams, software integrations, and a broader ecosystem,” he said, pointing out that Google’s previous attempt with Edge TPUs fizzled out for similar reasons.
Echoing similar thoughts, Ankush Sabharwal, founder of Corover.ai, a Bengaluru-based conversational Gen AI platform, told AIM that Google’s Trillium TPUs are apparently great for scalable and cost-effective training in Google Cloud, especially with JAX and TensorFlow. “NVIDIA’s H100 and Blackwell GPUs offer flexibility and work efficiently with PyTorch and CUDA,” he added.
Sabharwal added that GPUs would be the better choice for projects requiring diverse framework support.
Rajgopal explained that Google’s Edge TPUs are designed for inferencing on local, compact, and low-power devices, making them suitable for embedding into consumer appliances, surveillance systems, and even cars.
He noted that these TPUs can run existing or pre-trained models very efficiently, which helps bring down costs and opens opportunities for startups to build new applications. “Many companies are also working on providing options for running inferencing in the cloud, such as SambaNova and Cerebras,” he added.
On potential customers, Chilamkurthy was sceptical about startups or smaller players, suggesting instead that large enterprises and big tech firms could be early adopters. “Startups don’t buy GPUs, they just use APIs. If anyone bites, it will be enterprises like Meta, who control their entire stack,” he said.