The fundamental architecture supporting the global internet is undergoing a profound transformation, driven not by the exponential growth of data packets, but by the sheer energy demands of artificial intelligence. This shift is vividly illustrated by Equinix, the digital infrastructure giant, as it navigates the chasm between its legacy connectivity hubs and its new generation of high-power, liquid-cooled facilities designed specifically for AI. The story is one of evolution, moving from maximizing fiber density and cross-connects to optimizing thermal management and power delivery at an unprecedented scale.
The video offers an exclusive look inside two facilities on the same Silicon Valley campus: the venerable SV1 and the cutting-edge SV11, built to house massive AI clusters. [The host] spoke with Charlie Boyle, VP of DGX Systems at NVIDIA, inside the new facility, focusing on the engineering required to support the Grace Blackwell Superchip architecture. Equinix’s SV1, operating for a quarter century, represents the internet’s past—a colossal, cage-filled building famously referred to as the "center of the internet" for the West Coast.
SV1’s historical significance rests on its dense concentration of network carriers. Equinix pioneered the carrier-neutral colocation model, allowing hundreds of service providers to meet customers in one location, facilitating efficient traffic exchange and lowering latency. This density was the cornerstone of early internet innovation and financial connectivity. The facility remains vital today, a "living, working fossil" supporting everything from financial institutions to hyper-scalers. This dense interconnection point is crucial: "Something like over 90% of all West Coast internet traffic passes through this building." The ability to connect through multiple different pipes and service providers ensured efficiency and redundancy, defining the first era of digital infrastructure.
However, the demands of training massive AI models, from large language models to complex simulation workloads, have rendered the traditional air-cooled, space-optimized data center design obsolete. The challenge has shifted from maximizing the number of racks per square foot to maximizing power delivery and cooling efficiency per rack. The raw compute of modern GPUs produces heat loads that standard air cooling simply cannot remove.
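To see why, consider a rough back-of-the-envelope calculation of the airflow a single dense AI rack would demand. The figures below (rack power, air temperature rise) are assumptions chosen purely for illustration, not numbers from the video or from Equinix; the sketch simply applies the basic energy balance, heat removed = mass flow × specific heat × temperature rise.

```python
# Back-of-the-envelope: airflow needed to carry away the heat of one dense AI rack.
# All figures are illustrative assumptions, not specifications from the video or Equinix.

RACK_POWER_W = 120_000   # assumed heat load of a ~120 kW rack-scale AI system
AIR_CP = 1005.0          # specific heat of air, J/(kg*K)
AIR_DENSITY = 1.2        # air density, kg/m^3, near room temperature
DELTA_T_K = 15.0         # assumed air temperature rise from rack inlet to outlet, K

# Energy balance: Q = m_dot * c_p * dT  =>  m_dot = Q / (c_p * dT)
mass_flow_kg_s = RACK_POWER_W / (AIR_CP * DELTA_T_K)
volume_flow_m3_s = mass_flow_kg_s / AIR_DENSITY
volume_flow_cfm = volume_flow_m3_s * 2118.88  # cubic feet per minute

print(f"Required air mass flow:   {mass_flow_kg_s:.1f} kg/s")
print(f"Required air volume flow: {volume_flow_m3_s:.1f} m^3/s (~{volume_flow_cfm:,.0f} CFM)")
# Roughly 8 kg/s, or about 14,000 CFM, for a single rack -- far beyond what
# conventional rack fans and raised-floor air handling can deliver.
```

Moving on the order of fourteen thousand cubic feet of air per minute through one rack is not practical with conventional air-cooled designs, which is why density forces the move to liquid.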
This necessity gave rise to the SV11 facility, a blueprint for the future where thermal management dictates design. Inside SV11, the focus is squarely on supporting high-density AI clusters, exemplified by the NVIDIA DGX SuperPOD. NVIDIA VP Charlie Boyle explained that Equinix serves as their model deployment partner: "Equinix is our global deployment partner that's had SuperPODs in it everywhere. We wanted to build the exact same thing that our customers were going to get anywhere around the world at an Equinix facility." This standardization allows customers to deploy identical, proven infrastructure globally.
The architecture of the DGX SuperPOD reveals the severity of the power problem. The system, which links 72 GPUs to act as a single, enormous GPU, requires liquid cooling to function. Boyle emphasized that the system's power density necessitates this approach: "We couldn't build the GB200... without that liquid cooling because of the pure density on the system." That liquid cooling is delivered by specialized infrastructure: large pipes and coolant distribution units (CDUs) that circulate fluid up to massive chillers on the roof, a heat-rejection capacity far beyond that of traditional air handling units.
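The same energy balance, applied to a liquid loop of the kind a CDU serves, shows why liquid makes these densities tractable: water carries roughly four times as much heat per kilogram as air and is far denser, so the same heat load moves in a modest flow through fixed piping. As before, the numbers are illustrative assumptions, not GB200 or Equinix specifications.

```python
# The same energy balance applied to a liquid loop of the kind a CDU serves.
# Figures are illustrative assumptions, not GB200 or Equinix specifications.

RACK_POWER_W = 120_000    # assumed heat captured by the liquid loop, W
WATER_CP = 4186.0         # specific heat of water, J/(kg*K)
WATER_DENSITY = 1000.0    # kg/m^3
DELTA_T_K = 10.0          # assumed coolant temperature rise, supply to return, K

# Q = m_dot * c_p * dT  =>  m_dot = Q / (c_p * dT)
mass_flow_kg_s = RACK_POWER_W / (WATER_CP * DELTA_T_K)
litres_per_min = (mass_flow_kg_s / WATER_DENSITY) * 1000.0 * 60.0
gallons_per_min = litres_per_min / 3.785

print(f"Required coolant flow: {mass_flow_kg_s:.2f} kg/s "
      f"(~{litres_per_min:.0f} L/min, ~{gallons_per_min:.0f} GPM)")
# About 3 kg/s (~170 L/min): a modest flow through fixed piping, compared with the
# thousands of CFM of air the same heat load would require.
```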
The need for power and cooling extends beyond the immediate server racks, fundamentally reshaping the entire facility's utility infrastructure. Equinix is actively investing in alternative energy sources to meet the surging demand, including deploying large-scale, on-site fuel cells and pursuing agreements with next-generation nuclear providers. This diversified approach mitigates potential power constraints and supports sustainable growth. As Raouf Abdel, Executive Vice President of Global Operations at Equinix, noted in a recent announcement: "Access to round-the-clock electricity is critical to support the infrastructure that powers everything from AI-driven drug discovery to cloud-based video streaming."
Beyond power generation, the physical security and access control in these new facilities reflect the high value of the assets they house. Strict protocols, including biometric scans and mantraps (secure vestibules that admit only one person or vetted group at a time), ensure that only authorized personnel can reach the data halls, often requiring multiple layers of verification before entering customer cages.

The shift from SV1 to SV11 is not just an upgrade; it is a structural re-prioritization in which energy resilience and thermal efficiency become the ultimate determinants of competitive advantage in digital infrastructure. The facilities that master high-density power and cooling will be the ones that define the next decade of AI innovation.