For years, the biggest obstacle to scaling AI has been a simple lack of available Graphics Processing Units (GPUs). However, according to Microsoft CEO Satya Nadella, that era has quietly ended. Speaking on the Bg2 podcast with OpenAI CEO Sam Altman, Nadella made a surprising disclosure: Microsoft's supply chain is no longer experiencing a chip shortage.
The New Constraint: Chips With Nowhere to Plug In
According to Nadella, Microsoft now has accelerators (GPUs) in stock that are sitting unpowered. He noted that "there could be a whole bunch of chips sitting in inventory" waiting for data center space with adequate grid connections. The bottleneck has shifted from securing Nvidia GPUs to finding "warm shells": pre-built data centers located close to sufficient grid capacity.

This shift marks a dramatic turning point for the industry. Just 12 to 24 months ago, GPU shortages and multiyear waiting lists dominated the headlines. Today, the limiting factors are local power grids, lengthy permitting processes, and the sheer volume of electricity required to run modern AI clusters at scale. Some new hyperscale facilities already consume as much energy as small cities, and the demand curve shows no signs of flattening.
This new reality is forcing cloud giants to adopt entirely new strategies: securing power purchase agreements spanning decades, exploring on-site power generation options, and even investing in small modular nuclear reactors (SMRs) to secure future capacity.
The race is no longer just about who can buy the most chips, but who can provide the megawatts needed to run them. Nadella’s message underscores a broader reality for the AI industry: the physical infrastructure of the digital age is reaching its limits. Now, investors, policymakers, and energy providers are as critical to new AI breakthroughs as chip designers.