AI arms race: Hyperscalers’ $305 billion 2025 capex and Nvidia’s push for more GPUs
Top cloud providers funneled $305 billion into capital expenditures in 2025, and that build-out, set to increase in 2026, has tightened supply of the GPUs at the center of modern AI computing. The spending surge has translated into pronounced revenue growth for Nvidia and fresh pressure on chip supply chains.
Amazon, Microsoft and Alphabet's Google: $305 billion in 2025 capex
Amazon, Microsoft and Alphabet's Google accounted for the bulk of hyperscaler capital spending in 2025, together totaling $305 billion. That outlay is expected to expand significantly in 2026, and roughly half of data-center budgets are directed to chips and computing systems — a channel that has prompted the world’s biggest technology companies to compete directly for Nvidia’s GPUs, which have become the benchmark chip for AI workloads.
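As a back-of-the-envelope sketch of the split described above (the "roughly half" chip share is the article's estimate, not a reported line item):

```python
# Rough split of 2025 hyperscaler capex, using the article's figures.
# The ~50% chip share is an approximation, not a disclosed budget item.
total_capex_bn = 305   # combined Amazon, Microsoft and Google 2025 capex ($B)
chip_share = 0.5       # approximate share going to chips and computing systems

chip_spend_bn = total_capex_bn * chip_share
print(f"Implied chip/compute spend: about ${chip_spend_bn}B")
```

Even at this rough estimate, the implied spend on chips and computing systems alone exceeds $150 billion, which helps explain the direct competition for Nvidia's GPUs.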
Nvidia data center revenue: $51 billion, 66% growth and 89% of business
Nvidia’s data-center business grew 66% year over year in its most recent fiscal third quarter, reaching $51 billion and accounting for 89% of the company’s total revenue. Analysts project Nvidia’s total revenue will rise 67% year over year in the fiscal fourth quarter. CEO Jensen Huang has framed the shift as structural ("We've entered the virtuous cycle of AI"): more companies are building AI models and agents, which in turn increases demand for chips and compute capacity in data centers.
OpenAI deal: 10 gigawatts and eventual use of millions of GPUs
Nvidia struck a deal to deploy at least 10 gigawatts of AI data-center capacity last year with OpenAI, an organization that serves over 800 million ChatGPT users. That capacity is intended to support OpenAI’s eventual use of millions of GPUs as large-scale model deployment proceeds, and it illustrates how hyperscaler and AI developer commitments feed back into demand for Nvidia’s technology.
AI performance: Rubin chips versus the Blackwell generation
Nvidia routinely gives top cloud providers priority access to each year’s new chip generation, and its upcoming Rubin chips are positioned to deliver better AI performance than the previous Blackwell generation. That pace of chip innovation is prompting further investment in capacity so hyperscalers can deploy the most powerful systems available. At the same time, many hyperscalers are developing customized chips to reduce costs, intensifying competition even as general-purpose GPUs remain indispensable for a wide range of AI applications.
Valuation, profit and market signals: P/E ~24, $99 billion profit and analyst positions
Nvidia’s forward price-to-earnings ratio sits at roughly 24, while analysts expect earnings to grow 57% this year and 37% annualized over the next few years. The company earned $99 billion in profit over the trailing four quarters, a 53% margin on revenue. Even so, some investment services left Nvidia off their current top-10 stock picks.
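The profit and margin figures above can be cross-checked with simple arithmetic; the sketch below uses only the article's stated numbers (trailing profit and net margin) to back out implied revenue:

```python
# Cross-check of the article's trailing-four-quarter figures for Nvidia.
profit_bn = 99.0    # reported profit over the last four quarters ($B)
net_margin = 0.53   # profit as a share of revenue, per the article

# Revenue implied by profit / margin; small rounding in the reported
# figures means this is approximate, not an exact reported total.
implied_revenue_bn = profit_bn / net_margin
print(f"Implied trailing revenue: roughly ${implied_revenue_bn:.0f}B")
```

The implied trailing revenue of roughly $187 billion is consistent with the scale of the data-center figures cited earlier.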
Market commentary notes that NVDA’s share price has been trading near the levels of early December, when a prior bullish piece on the company was published. The commentary’s author, Bohdan Kucheriavyi, disclosed a beneficial long position in Nvidia through stock, options or other derivatives, said the piece expressed his personal opinions, and noted he received no compensation beyond the platform’s usual arrangements. The disclosure also stated that he is not a licensed financial advisor, that investing carries risks including loss of principal, that past performance is no guarantee of future results, and that third-party analysts’ views may not reflect those of the platform, which is not a licensed securities dealer or US investment adviser.
One separate research note described a small company as an "Indispensable Monopoly" that provides critical technology used by both Nvidia and Intel. The broader implication is that multiple layers of the AI supply chain are drawing investor attention as hyperscalers expand capacity.
What makes this notable is the clear cause-and-effect loop: rising hyperscaler capex and the half of those budgets devoted to chips drive demand for GPUs, which in turn fuels Nvidia’s data-center revenue and profitability, even as customized chip strategies by cloud providers introduce competitive pressures.