On Thursday, Microsoft CEO Satya Nadella posted a video on Twitter showing off the company's first large-scale AI system, or AI "factory" in Nvidia's terminology. He said it is only the beginning, with many more Nvidia AI factories set to come online across Microsoft Azure's worldwide data centers to run OpenAI workloads.
Each of these systems is a cluster of more than 4,600 Nvidia GB300 rack servers built around the highly sought-after Blackwell Ultra GPU, all linked together with Nvidia's high-speed InfiniBand networking technology. (Beyond dominating the AI chip market, Nvidia also locked up InfiniBand itself when, under CEO Jensen Huang, it acquired Mellanox for $6.9 billion in 2019.)
Microsoft has committed to rolling out “hundreds of thousands of Blackwell Ultra GPUs” as it expands these AI systems globally. The scale of these installations is impressive (and the company has released extensive technical information for those interested in the hardware), but the timing of this news is also significant.
This announcement comes on the heels of OpenAI, Microsoft's partner and sometimes rival, securing two major data center agreements with Nvidia and AMD. In 2025 alone, OpenAI has amassed commitments totaling around $1 trillion to build out its own data centers, according to some estimates. CEO Sam Altman also indicated this week that additional projects are on the horizon.
Microsoft, for its part, is making it clear that it already operates more than 300 data centers across 34 countries, and claims these facilities are "uniquely positioned" to "address the needs of cutting-edge AI today." The company also says these AI systems are designed to handle future models with "hundreds of trillions of parameters."
We anticipate further updates on Microsoft’s efforts to scale up AI infrastructure later this month. Microsoft CTO Kevin Scott is scheduled to speak at TechCrunch Disrupt, taking place from October 27 to 29 in San Francisco.