Microsoft unveiled the newest Fairwater AI datacenter in Atlanta, part of a connected system with the Wisconsin Fairwater site and the wider Azure network. Together, they form what Microsoft calls the world’s first “planet-scale AI superfactory,” designed to meet unprecedented demand for AI compute.
Fairwater departs from traditional cloud datacenter design. It uses a single flat network that integrates hundreds of thousands of NVIDIA GB200 and GB300 (Blackwell) GPUs into one massive supercomputer. The system is built for the full range of AI workloads, including pre-training, fine-tuning, reinforcement learning (RL), and synthetic data generation. A dedicated AI WAN connects all Fairwater sites into an elastic, fungible pool of capacity that maximizes GPU utilization.
To overcome physical limits like speed-of-light latency, Fairwater maximizes compute density. It uses closed-loop liquid cooling in which the water is filled once and recirculated for more than six years with virtually no loss to evaporation, enabling extremely dense racks of up to 140 kW each. A two-story building layout shortens cable runs so every GPU stays tightly connected.
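A back-of-envelope calculation shows why shorter cable runs matter: signal propagation in optical fiber adds roughly 5 ns per meter each way, so density directly buys latency. The figures below are illustrative, not Microsoft's own numbers.

```python
# Illustrative sketch: propagation delay grows linearly with cable
# length, so a denser layout with shorter runs means lower latency.

SPEED_OF_LIGHT_M_PER_S = 299_792_458
FIBER_VELOCITY_FACTOR = 0.67  # light travels at roughly 2/3 c in fiber

def one_way_delay_ns(cable_length_m: float) -> float:
    """One-way propagation delay over optical fiber, in nanoseconds."""
    return cable_length_m / (SPEED_OF_LIGHT_M_PER_S * FIBER_VELOCITY_FACTOR) * 1e9

print(round(one_way_delay_ns(10)))  # → 50 (ns for a 10 m run)
print(round(one_way_delay_ns(50)))  # → 249 (ns for a 50 m run)
```

Halving every cable run halves this fixed latency floor on each hop, which compounds across the many hops of a large collective operation.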
The Atlanta location was chosen for reliable grid power: it delivers four-nines availability at three-nines cost. Microsoft removed the need for on-site generators and UPS systems by depending on high-availability utility power and by working with partners to stabilize power demands from large AI jobs. They also use software, GPU-level power controls, and on-site energy storage to smooth out power fluctuations.
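One way to think about stabilizing the power swings of large synchronized training jobs is as a ramp-rate limiter: the facility follows demand but caps how fast its draw on the grid may change, with batteries or GPU power caps covering the difference. This is a hypothetical sketch of that idea, not Microsoft's actual control scheme; all names and values are illustrative.

```python
# Hypothetical ramp-rate limiter: clamp how fast the facility-level
# power setpoint may change, absorbing sudden swings from synchronized
# training steps. All values are illustrative.

def smooth_power(demand_kw: list[float], max_ramp_kw: float) -> list[float]:
    """Track demand, but never move more than max_ramp_kw per step."""
    setpoint = demand_kw[0]
    out = [setpoint]
    for demand in demand_kw[1:]:
        delta = max(-max_ramp_kw, min(max_ramp_kw, demand - setpoint))
        setpoint += delta
        out.append(setpoint)
    return out

# A job that spikes from 100 kW to 500 kW is presented to the grid
# as a gradual ramp; storage absorbs the shortfall in the meantime.
print(smooth_power([100, 500, 500, 100], max_ramp_kw=150))
# → [100, 250, 400, 250]
```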
Inside the datacenter, each rack contains up to 72 NVIDIA Blackwell GPUs connected via NVLink, with more than 14 TB of pooled memory per rack. Two-tier Ethernet networking allows hundreds of thousands of GPUs to act as a single system with 800 Gbps connectivity, using SONiC to avoid vendor lock-in. Packet-level optimizations provide better congestion control, faster retransmission, and balanced load distribution.
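The load-balancing idea behind such packet-level optimizations can be sketched as per-packet "spraying": instead of hashing a whole flow onto one uplink (classic ECMP), each packet round-robins across all equal-cost uplinks, so one elephant flow cannot saturate a single link. This is a minimal illustration of the general technique, not Azure's implementation.

```python
# Illustrative sketch: per-packet spraying vs. per-flow hashing.

def spray(num_packets: int, num_links: int) -> list[int]:
    """Per-packet spraying: round-robin each packet onto the next uplink."""
    return [p % num_links for p in range(num_packets)]

def flow_hash(flow_id: int, num_links: int) -> int:
    """Per-flow ECMP: every packet of a flow is pinned to one link."""
    return hash(flow_id) % num_links

# Spraying 8 packets over 4 uplinks loads each link equally...
loads = [spray(8, 4).count(link) for link in range(4)]
print(loads)  # → [2, 2, 2, 2]
# ...whereas per-flow hashing would put all 8 packets on one link.
```

The trade-off is that spraying can reorder packets, which is why it is typically paired with the improved retransmission and congestion-control machinery the article mentions.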
Because frontier models now exceed what any single site can handle, Microsoft built a continent-scale optical AI WAN. They added 120,000 miles of new fiber across the U.S., enabling different generations of supercomputers to interconnect and operate as one AI superfactory. Workloads are routed intelligently based on scale-up, scale-out, or cross-site needs.
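The scale-up / scale-out / cross-site distinction can be pictured as a placement decision driven by a job's size and communication pattern. The sketch below is hypothetical; the thresholds (72 GPUs per NVLink rack, a per-site capacity figure) are illustrative assumptions, not Microsoft's scheduler logic.

```python
# Hypothetical placement sketch mirroring the article's three tiers.
# rack_size and site_capacity are illustrative assumptions.

def place_job(gpus_needed: int, rack_size: int = 72,
              site_capacity: int = 200_000) -> str:
    """Route a job by the smallest domain that can hold it."""
    if gpus_needed <= rack_size:
        return "scale-up"    # fits inside one NVLink rack
    if gpus_needed <= site_capacity:
        return "scale-out"   # spans racks within one Fairwater site
    return "cross-site"      # spans sites over the optical AI WAN

print(place_job(64))       # → scale-up
print(place_job(10_000))   # → scale-out
print(place_job(300_000))  # → cross-site
```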
The Atlanta Fairwater expansion reflects Microsoft’s experience running some of the world’s largest training jobs, combining breakthroughs in density, cooling, networking, and power. The result is flexible, high-performance infrastructure for all modern AI workloads and easier integration of AI across customer applications.


