RECORD PERFORMANCE IN NETWORK COMMUNICATION

NVIDIA Quantum-2, the seventh generation of the NVIDIA InfiniBand architecture, provides AI developers and researchers with the fastest network performance and feature set available to tackle the world's most challenging tasks. NVIDIA Quantum-2 supports the world's leading supercomputing data centers with software-defined networking, In-Network Computing, performance isolation, advanced acceleration engines, remote direct memory access (RDMA), and the fastest speeds and feeds, up to 400 Gb/s.

DATA SPEED

2X
Data throughput
400 gigabits per second

IMPROVED PERFORMANCE

4X
MPI performance
New MPI All-to-All In-Network Computing Acceleration Engine

IMPROVED TOTAL COST OF OWNERSHIP

5X
Switch system capacity
>1.6 petabits per second (bidirectional)
with 2,048 NDR connections

READY FOR EXASCALE

6.5X
Higher scalability
Connecting >1M nodes
with three hops (Dragonfly+ topology)

ACCELERATED DEEP LEARNING

32X
More AI acceleration
NVIDIA SHARP In-Network Computing technology

HIGH-IMPACT PERFORMANCE

IMPROVEMENTS FOR SUPERCOMPUTERS AND APPLICATIONS IN HPC AND AI

Accelerated in-network computing

Today's high-performance computing (HPC), AI, and hyperscale infrastructures require faster interconnects and smarter networks to analyze data and run complex simulations more quickly and efficiently. NVIDIA Quantum-2 enhances and extends In-Network Computing with preconfigured and programmable compute engines, such as the third generation of the NVIDIA Scalable Hierarchical Aggregation and Reduction Protocol (SHARPv3™), Message Passing Interface (MPI) tag matching, and MPI All-to-All, delivering the lowest cost per node and the best ROI.
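
To make the offload concrete: the collectives that SHARPv3 accelerates are ordinary MPI calls, so applications need no code changes. Below is a minimal sketch in C (illustrative only, not NVIDIA sample code; enabling SHARP offload is a property of the MPI library and fabric configuration, for example NVIDIA HPC-X, rather than of the application, and the file name is hypothetical):

    /* allreduce_demo.c
     * A plain MPI_Allreduce; on a SHARP-enabled InfiniBand fabric the
     * reduction tree can execute inside the switches, untouched by hosts.
     * Build: mpicc allreduce_demo.c -o allreduce_demo */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        double local = (double)rank;  /* each rank contributes one value */
        double sum = 0.0;

        /* The collective an in-network engine can execute: every rank
         * receives the global sum without host-side reduction trees. */
        MPI_Allreduce(&local, &sum, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum over %d ranks = %.0f\n", size, sum);

        MPI_Finalize();
        return 0;
    }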

Performance Isolation

The NVIDIA Quantum-2 InfiniBand platform provides innovative proactive monitoring and congestion management to isolate traffic, virtually eliminate performance jitter, and provide predictive performance as if the application were running on a dedicated system.

Cloud-native supercomputing

The NVIDIA Cloud-Native Supercomputing platform leverages the NVIDIA® BlueField® data processing unit (DPU) architecture with NVIDIA Quantum-2 InfiniBand high-speed, low-latency networking. The solution delivers bare-metal performance, user management and isolation, data protection, and on-demand high-performance computing (HPC) and AI services, simply and securely.

DELIVERING DATA AT THE SPEED OF LIGHT

Host Channel Adapter

The NVIDIA ConnectX-7 InfiniBand host channel adapter (HCA) with support for fourth- and fifth-generation PCIe is available in a variety of form factors and offers single or dual 400 Gb/s network ports.

The ConnectX-7 InfiniBand HCAs include advanced In-Network Computing capabilities as well as additional programmable engines that preprocess data algorithms and offload the application control path to the network.
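
For orientation, host software addresses an HCA through the standard InfiniBand verbs API. The sketch below uses generic libibverbs calls (nothing ConnectX-7-specific; the file name is hypothetical) to enumerate the installed adapters and report each one's first port state:

    /* list_hcas.c - enumerate InfiniBand devices via libibverbs.
     * Build: cc list_hcas.c -o list_hcas -libverbs */
    #include <infiniband/verbs.h>
    #include <stdio.h>

    int main(void)
    {
        int num;
        struct ibv_device **list = ibv_get_device_list(&num);
        if (!list) { perror("ibv_get_device_list"); return 1; }

        for (int i = 0; i < num; i++) {
            struct ibv_context *ctx = ibv_open_device(list[i]);
            if (!ctx) continue;

            struct ibv_port_attr port;
            if (ibv_query_port(ctx, 1, &port) == 0)  /* port numbering starts at 1 */
                printf("%s: port 1 state=%s\n",
                       ibv_get_device_name(list[i]),
                       ibv_port_state_str(port.state));

            ibv_close_device(ctx);
        }
        ibv_free_device_list(list);
        return 0;
    }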

Fixed-configuration switches

NVIDIA Quantum-2 fixed-configuration switches provide 64 ports of 400 Gb/s, or 128 ports of 200 Gb/s, across 32 physical octal small form-factor pluggable (OSFP) connectors. The compact 1U switch design is available in air-cooled and liquid-cooled versions, with internal or external management.

The NVIDIA Quantum-2 fixed-configuration switches provide an aggregate bidirectional throughput of 51.2 terabits per second (Tb/s) and a capacity of more than 66.5 billion packets per second.
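
As a back-of-the-envelope check, the aggregate figure follows directly from the port count, with each 400 Gb/s port counted in both directions:

    64 \times 400\ \mathrm{Gb/s} \times 2 = 51{,}200\ \mathrm{Gb/s} = 51.2\ \mathrm{Tb/s}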

Modular switches

NVIDIA Quantum-2 modular switches are available in the following port configurations:

  • 2,048 ports at 400 Gb/s or 4,096 ports at 200 Gb/s
  • 1,024 ports at 400 Gb/s or 2,048 ports at 200 Gb/s
  • 512 ports at 400 Gb/s or 1,024 ports at 200 Gb/s

The largest modular switch offers a total bidirectional throughput of 1.64 petabits per second (Pb/s), five times higher than that of the previous-generation NVIDIA Quantum InfiniBand modular switch.
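
The same arithmetic accounts for the modular figure, assuming all 2,048 ports run at the full 400 Gb/s NDR rate in both directions, and matches the >1.6 Pb/s capacity quoted above:

    2048 \times 400\ \mathrm{Gb/s} \times 2 = 1{,}638{,}400\ \mathrm{Gb/s} \approx 1.64\ \mathrm{Pb/s}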

Transceivers and cables

NVIDIA Quantum-2 connectivity options provide maximum flexibility for building the topology of your choice. They include a variety of transceivers and multi-fiber push-on (MPO) connectors, active copper cables (ACCs), and direct-attach cables (DACs) with 1:2 or 1:4 splitting options.

Backward compatibility also allows new 400 Gb/s clusters to connect to existing 200 Gb/s or 100 Gb/s infrastructures.

NVIDIA QUANTUM-2 INFINIBAND PLATFORMS
