Nvidia has introduced its Spectrum-XGS Ethernet technology, which it positions as the start of a new era in AI infrastructure. The solution links data centers across cities and even continents so they operate as a single, unified supercomputer, enabling what Nvidia calls "giga-scale AI superfactories."
Nvidia to Converge Data Centers
In traditional data centers, the capacity of a single facility is a limiting factor. To overcome this limitation, Nvidia has developed “distance-aware” networking. This technology enables predictable and low-latency communication not only within a campus but also across cities and continents.
Announced at the Hot Chips 2025 event, Spectrum-XGS does not require completely new hardware. It works on existing Spectrum-X switches and ConnectX SuperNICs with software and firmware updates.
The system offers features such as distance-adaptive congestion control, precise latency management, and end-to-end telemetry to manage network traffic more efficiently. Nvidia says these innovations nearly double the communication speed for distributed AI training across multiple GPUs and multiple nodes.
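Nvidia has not published the algorithm behind distance-adaptive congestion control, but the general idea can be pictured as sizing the amount of in-flight data to the bandwidth-delay product of each path, so that a long-haul inter-city link is kept full without overflowing switch buffers. The sketch below is a generic illustration of that principle, not Nvidia's implementation; the link rates, RTTs, and the headroom factor are illustrative assumptions.

```python
# Generic sketch of distance-aware window sizing (NOT Nvidia's algorithm).
# A sender targets a congestion window near the path's bandwidth-delay
# product (BDP): longer RTTs need proportionally more data in flight.

def target_window_bytes(link_rate_gbps: float, rtt_ms: float,
                        headroom: float = 0.9) -> int:
    """BDP scaled by a headroom factor to leave buffer slack."""
    bytes_per_sec = link_rate_gbps * 1e9 / 8       # line rate in bytes/s
    bdp = bytes_per_sec * (rtt_ms / 1e3)           # bytes in flight to fill the pipe
    return int(bdp * headroom)

# Same 400 Gb/s link rate, very different distances (assumed RTTs):
campus = target_window_bytes(400, rtt_ms=0.05)     # within a campus
inter_city = target_window_bytes(400, rtt_ms=10)   # between cities
print(campus, inter_city)
```

The 200x longer RTT calls for a 200x larger in-flight window, which is why a congestion controller tuned for intra-campus distances underutilizes a cross-city link unless it accounts for latency.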
The company describes this approach as "scale-across": after scaling up within a server and scaling out within a data center, it connects separate facilities as a third dimension of growth. This turns geographically dispersed data centers into a unified architecture offering massive AI capacity.
CoreWeave is one of the first companies to implement this transformation. By combining its facilities across regions with Spectrum-XGS, CoreWeave plans to offer its customers a massive infrastructure that operates like a single supercomputer.