Nvidia's GPU Dominance Gets Stronger Amid Record Enterprise Demand From Tech Giants

The appetite for high-performance processors from GPU vendors has reached unprecedented levels, with major technology companies dramatically escalating their infrastructure investments to compete in artificial intelligence. This trend became evident when Oracle founder Larry Ellison revealed during the company's recent analyst briefing that demand for chips has become so intense that even the world's leading GPU vendor, Nvidia, is struggling to keep pace with customer requests.
The Infrastructure Race Shows No Signs of Slowing
Oracle currently operates 85 data centers with an additional 77 under construction, yet Ellison envisions a future where the company could eventually run up to 2,000 facilities. The company’s unique architecture—featuring automated systems and proprietary RDMA networking technology—delivers significant cost advantages for AI development compared to traditional infrastructure providers. This efficiency advantage has generated massive demand from leading artificial intelligence startups and technology firms.
The numbers tell the story: Oracle generated $2.2 billion in cloud infrastructure revenue during its first fiscal quarter, representing 46% year-over-year growth. More impressively, the company closed out the quarter with a record $99 billion in remaining performance obligations—a 53% increase from the previous period. During the same quarter, Oracle signed 42 new GPU capacity deals valued at $3 billion, indicating that enterprise customers cannot get enough computing resources.
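Those growth rates can be sanity-checked with a quick back-of-envelope sketch. The Python snippet below uses only the figures quoted above (the $2.2 billion quarterly revenue, 46% growth, $99 billion backlog, and 53% increase) to back out the implied year-ago values; the results are illustrative estimates, not figures from Oracle's filings.

```python
# Back out the implied year-ago figures from the quoted growth rates.
# All inputs are the numbers cited in this article, in billions of USD.

oci_revenue_now = 2.2      # Oracle cloud infrastructure revenue this quarter ($B)
oci_growth = 0.46          # 46% year-over-year growth

rpo_now = 99.0             # remaining performance obligations ($B)
rpo_growth = 0.53          # 53% increase from the prior period

# Implied prior-period figures: current = prior * (1 + growth)
oci_revenue_prior = oci_revenue_now / (1 + oci_growth)
rpo_prior = rpo_now / (1 + rpo_growth)

print(f"Implied year-ago OCI revenue: ${oci_revenue_prior:.2f}B")   # ~ $1.51B
print(f"Implied prior-period backlog: ${rpo_prior:.1f}B")           # ~ $64.7B
```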
Enterprise Customers Are Begging for More Chips
Ellison recounted a dinner conversation with Tesla CEO Elon Musk and Nvidia CEO Jensen Huang at a high-end restaurant in Palo Alto. Both Ellison and Musk reportedly implored the chip maker's leadership to accept more orders, emphasizing that they needed substantially greater capacity than was currently available. The anecdote underscores how acute the shortage of premium processors has become.
The desperation for additional capacity is real across multiple fronts. Oracle plans to double its data center infrastructure spending from $6.9 billion in fiscal 2024 to approximately $13.8 billion in fiscal 2025. Tesla is simultaneously constructing a 50,000-GPU cluster by year-end, requiring a $10 billion investment. Microsoft spent $55.7 billion on capital expenditures primarily for AI infrastructure during fiscal 2024 and expects to invest even more going forward. Amazon’s capex spending is tracking toward $60 billion this calendar year.
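To put those commitments in per-unit terms, here is a rough, illustrative calculation built only from the figures quoted above. It assumes the $10 billion covers the entire Tesla cluster build-out (chips, networking, power, and facilities) rather than the processors alone, which is an assumption rather than a disclosed breakdown.

```python
# Rough per-GPU cost implied by the Tesla cluster figures cited above.
# Assumption: the $10B covers the full build-out, not just the chips.

cluster_budget = 10e9      # $10 billion investment
gpu_count = 50_000         # planned cluster size

cost_per_gpu_slot = cluster_budget / gpu_count
print(f"Implied all-in cost per GPU slot: ${cost_per_gpu_slot:,.0f}")  # ~ $200,000

# Oracle's quoted data center spending, roughly doubling year over year.
capex_fy2024 = 6.9e9
capex_fy2025 = 13.8e9
print(f"Oracle capex growth factor: {capex_fy2025 / capex_fy2024:.1f}x")  # 2.0x
```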
These commitments mean Nvidia faces a problem most manufacturers would envy: insufficient supply to meet explosive demand.
What This Means for Processor Manufacturers
The GPU vendors supplying these massive deployments are capitalizing on unprecedented growth opportunities. In its fiscal 2025 second quarter, Nvidia generated $26.3 billion in data center revenue, primarily from GPU sales, a 154% year-over-year increase. While the growth rate has moderated slightly given the enormous scale involved, customer spending shows no sign of decelerating.
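The point about moderating growth is easier to see with a small illustration. The sketch below backs out the implied year-ago revenue from the 154% figure, then shows, under the purely hypothetical assumption that the same absolute dollar gain repeats, how the percentage growth rate compresses as the revenue base grows.

```python
# Why percentage growth "moderates" even when absolute demand keeps climbing.
# The only real input is the $26.3B / +154% figure cited above; the
# forward projection is a hypothetical illustration, not a forecast.

dc_revenue_now = 26.3e9        # data center revenue this quarter
yoy_growth = 1.54              # 154% year-over-year increase

dc_revenue_prior = dc_revenue_now / (1 + yoy_growth)   # ~ $10.4B
absolute_gain = dc_revenue_now - dc_revenue_prior      # ~ $15.9B

# Hypothetical: the same absolute dollar gain repeats next year.
implied_next_growth = absolute_gain / dc_revenue_now   # ~ 61%

print(f"Implied year-ago revenue: ${dc_revenue_prior / 1e9:.1f}B")
print(f"Same dollar gain next year would be only {implied_next_growth:.0%} growth")
```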
The next generation of processors promises even greater capabilities. Oracle intends to deploy clusters featuring 131,072 chips—substantially larger than its current maximum of around 32,000 units. These new configurations will incorporate the latest architecture designed for AI inference tasks at dramatically higher speeds than existing generations, enabling developers to construct increasingly sophisticated AI models.
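For a quick sense of scale, using only the cluster sizes quoted above, the planned deployments represent roughly a fourfold jump over today's largest configuration; 131,072 also happens to be exactly 2^17.

```python
# Scale-up implied by Oracle's planned clusters versus its current maximum.
planned_cluster = 131_072   # chips per planned cluster (note: 2**17)
current_cluster = 32_000    # approximate current maximum cited above

print(f"Scale-up factor: {planned_cluster / current_cluster:.1f}x")   # ~ 4.1x
print(f"Planned size is a power of two: {planned_cluster == 2**17}")  # True
```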
Valuation Considerations for Investors
From a valuation perspective, Nvidia's price-to-earnings multiple of 52.7 appears elevated compared with the broader technology sector's ratio of 30.9. Forward-looking metrics, however, present a different picture: Wall Street analysts project earnings per share of $4.02 for fiscal 2026, which implies a forward P/E ratio of approximately 28.8.
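That forward multiple follows directly from the definition of the ratio. The sketch below reproduces the arithmetic; the share price it uses is not quoted in this article and is simply backed out from the two numbers given ($4.02 projected EPS and the roughly 28.8 forward P/E), so treat it as an illustration rather than a market quote.

```python
# Forward P/E = current share price / projected earnings per share.
# The price below is backed out from the article's own figures
# (28.8 forward multiple x $4.02 projected EPS); it is illustrative only.

projected_eps_fy2026 = 4.02
forward_pe_quoted = 28.8

implied_price = forward_pe_quoted * projected_eps_fy2026
print(f"Implied share price: ${implied_price:.2f}")      # ~ $115.78

# Recompute the forward multiple from that price as a consistency check.
forward_pe = implied_price / projected_eps_fy2026
print(f"Forward P/E: {forward_pe:.1f}")                  # 28.8

# Trailing multiple cited in the article, for comparison.
trailing_pe = 52.7
print(f"Premium vs. trailing multiple: {trailing_pe / forward_pe:.1f}x")  # ~ 1.8x
```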
For investors with a time horizon extending through the next 18 months, current prices may represent attractive entry points, assuming analyst forecasts materialize as expected. The commitments from major enterprises suggest the demand environment will remain robust in the near term, with little indication of an imminent slowdown in infrastructure spending or in Nvidia's order pipeline.
Long-term competitive dynamics will eventually matter as rival chipmakers continue developing competing accelerators. For now, based on current enterprise spending trajectories and order backlogs, the semiconductor supply chain supporting AI infrastructure shows every sign of remaining constrained for the foreseeable future.