NVIDIA Positions Itself as Backbone of $4 Trillion AI Gold Rush
NVIDIA is no longer just a chipmaker. Its latest earnings suggest it has become the world’s most critical supplier of infrastructure for AI.
The company posted record quarterly revenue of $46.7 billion, up 56% from a year ago, driven by insatiable demand for its data centre products. Sales of its latest Blackwell and Blackwell Ultra GPUs are ramping at unprecedented speed, with NVIDIA now producing 1,000 AI racks a week.
By 2030, NVIDIA expects trillions to be spent worldwide on AI infrastructure, from chips and data centres to software platforms and supercomputers. “Blackwell and Rubin AI factory platforms will be scaling into the $3 trillion to $4 trillion global AI factory build out through the end of the decade,” CEO Jensen Huang said on the earnings call.
Huang said the company expects revenue of $54 billion, plus or minus 2%, for the next quarter, excluding any shipments to China.
CFO Colette Kress added that sales to China could contribute between $2 billion and $5 billion. “There is interest in our H20s. We have received the initial set of licenses, and we have supply ready,” Kress said. “That is why we indicated that shipments this quarter could potentially fall in the range of $2 billion to $5 billion.”
Competitors in China
Huang estimated that China alone represents about a $50 billion opportunity for NVIDIA this year, if the company could compete there with its latest products. “And if it’s $50 billion this year, you’d expect it to grow 50% annually, just as the rest of the world’s AI market is growing,” he added.
At the same time, China is actively competing with NVIDIA in the GPU space, primarily through domestic semiconductor companies such as Huawei, Alibaba, Baidu, and Cambricon Technologies, which are developing GPUs for AI and gaming.
Cambricon on Thursday reported a record-breaking profit for the first half of the year, fueled by rising demand from companies like ByteDance for locally produced semiconductors as alternatives to NVIDIA chips. The Beijing-based firm earned Rmb1 billion ($140 million) in profit over the first six months, a sharp turnaround from a loss of Rmb533 million in the same period last year.
Meanwhile, Huawei’s Ascend 910C, an evolution of its 2019 Ascend 910, is engineered for AI inference tasks. Initially slated for mass production in mid-2025, its rollout has been delayed by technical challenges, and Huawei now plans to begin production by the end of 2025. Recent benchmarks indicate the chip delivers roughly 60% of the performance of NVIDIA’s H100 GPU in inference workloads.
Huawei is also preparing to test its new Ascend 910D AI chip in China and has introduced the Ascend 920. According to DigiTimes Asia, the Ascend 920 is slated to enter mass production in the second half of 2025, and industry experts believe it could be a viable alternative to NVIDIA’s H20 GPUs.
The Rise of Sovereign AI
Beyond big tech and China, governments worldwide are racing to build sovereign AI infrastructure to reduce dependence on foreign cloud providers. NVIDIA has quickly become the go-to supplier.
The EU is investing €20 billion in 20 AI factories, while the UK’s Isambard-AI supercomputer, built with NVIDIA, is now the most powerful in the country. In the Middle East, sovereign wealth funds are pouring billions into AI data centres powered by NVIDIA’s hardware.
Kress said NVIDIA expects over $20 billion in sovereign AI revenue this year, more than double last year. In other words, nations are treating AI as a matter of national security, and NVIDIA is cashing in.
Chatbots to Reasoning Systems
The company’s biggest bet is that today’s generative AI systems are just the beginning. Huang argued that the next generation, so-called agentic or reasoning AI, will make current chatbots look primitive.
“Where chatbots used to be one-shots, you give it a prompt and it would generate the answer, now the AI does research,” Huang explained. “It thinks and does a plan, and it might use tools.”
This shift will require exponentially more computing power. “The amount of computation necessary… could be 100x, 1,000x and potentially even more,” Huang said.
That explosion in demand is why NVIDIA built its Blackwell NVLink 72 rack-scale system, a new architecture designed to link thousands of GPUs with extreme efficiency.
If AI systems are going to be 100x more compute-intensive, they’ll also be 100x hungrier for power. In Huang’s view, that turns energy efficiency into revenue efficiency.
“In a world of power-limited data centres, perf per watt drives directly to revenues,” he said. “The more you buy, the more you grow.”
Can Rivals Catch Up?
Huang shared that Blackwell’s data centre revenue had increased by 17% sequentially, reflecting its strong market adoption. Looking ahead, he discussed the upcoming Rubin architecture, expected to enter volume production in late 2025 and become available in early 2026. He revealed that six new Rubin chips are already in fabrication at TSMC.
Some rivals are betting on custom ASIC chips to challenge NVIDIA’s GPUs. But Huang dismissed the threat, arguing that accelerated computing is “the ultimate, most extreme computer science problem the world’s ever seen.”
What sets NVIDIA apart, he said, is its full-stack approach, including GPUs, CPUs, networking, and CUDA software, all designed to work seamlessly. That ecosystem, developed over decades, is why developers and enterprises keep flocking to NVIDIA.
The post NVIDIA Positions Itself as Backbone of $4 Trillion AI Gold Rush appeared first on Analytics India Magazine.