News from the Grace Lineup!

NVIDIA SuperPOD with GB200
The NVIDIA GB200 Grace Blackwell Superchip combines two NVIDIA Blackwell Tensor Core GPUs and a Grace CPU and can be scaled to the GB200 NVL72, a massive 72-GPU system connected via NVIDIA® NVLink®, to deliver 30x faster real-time inference for large language models (LLMs).

NVIDIA GRACE HOPPER SUPERCHIP
NVIDIA Grace Hopper™ Superchip combines the Grace and Hopper architectures using NVIDIA® NVLink®-C2C to provide a coherent CPU and GPU memory model for accelerated AI and High-Performance Computing (HPC) applications.
System designs for digital twins, artificial intelligence, and high-performance computing

NVIDIA OVX™
for digital twins and NVIDIA Omniverse™
NVIDIA Grace CPU Superchip
NVIDIA GPUs
NVIDIA BlueField®-3

NVIDIA HGX™
for HPC
NVIDIA Grace CPU Superchip
NVIDIA BlueField-3
OEM-defined input/output (IO)

NVIDIA HGX
for AI training, inference, and HPC
NVIDIA Grace Hopper Superchip CPU + GPU
NVIDIA BlueField-3
OEM-defined IO / fourth-generation NVLink
Find out more about the latest technical innovations!
ACCELERATED CPU-TO-GPU CONNECTIONS WITH NVLINK-C2C
Solving the biggest AI and HPC problems requires high-capacity, high-bandwidth memory (HBM). NVIDIA® NVLink®-C2C provides 900 gigabytes per second (GB/s) of bidirectional bandwidth between the NVIDIA Grace CPU and NVIDIA GPUs. The link provides a unified, cache-coherent memory address space that combines system memory and HBM GPU memory for simplified programmability. This coherent, high-bandwidth connection between CPU and GPUs is the key to accelerating tomorrow's most complex problems.
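To make the "simplified programmability" point concrete, here is a minimal sketch (not taken from NVIDIA documentation) of how a single, coherent address space lets a GPU kernel work directly on an allocation the CPU also touches, with no explicit copies. It uses standard CUDA managed memory so it also runs on non-Grace systems; the kernel name, sizes, and launch configuration are illustrative assumptions.

```cuda
// Minimal sketch: one allocation, visible to both CPU and GPU, no cudaMemcpy.
// Kernel name and problem size are illustrative, not from the source text.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float *data, float factor, size_t n) {
    size_t i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;            // GPU writes into the shared address space
}

int main() {
    const size_t n = 1 << 20;
    float *data = nullptr;

    cudaMallocManaged(&data, n * sizeof(float));   // single allocation for CPU and GPU

    for (size_t i = 0; i < n; ++i) data[i] = 1.0f; // CPU initialises the data in place

    scale<<<(n + 255) / 256, 256>>>(data, 2.0f, n); // GPU updates the same memory
    cudaDeviceSynchronize();

    printf("data[0] = %f\n", data[0]);              // CPU reads the GPU's result directly
    cudaFree(data);
    return 0;
}
```

On coherent Grace-based systems, the NVLink-C2C connection generally allows the GPU to access system-allocated memory as well; the managed-memory form above is simply the portable way to express the same single-address-space idea.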
USE HIGH-BANDWIDTH CPU MEMORY WITH LPDDR5X
NVIDIA Grace is the first server CPU to leverage LPDDR5X memory with server-class reliability through mechanisms such as error-correcting code (ECC) to meet the needs of the data centre, while delivering two times higher memory bandwidth and up to ten times better energy efficiency compared with currently available server memory.
MORE PERFORMANCE AND EFFICIENCY WITH ARM NEOVERSE V2 CORES
Even though the parallel computing capabilities of GPUs continue to advance, workloads can still be limited by serial tasks running on the CPU. A fast and efficient CPU is therefore a critical component of the system design for optimal workload acceleration. The NVIDIA Grace CPU integrates Arm Neoverse V2 cores into an NVIDIA-designed Scalable Coherency Fabric to deliver high performance in a power-efficient design, making work easier for scientists and researchers.
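One standard way to quantify this serial-bottleneck argument (the formula is an illustration added here, not part of the original text) is Amdahl's law, which bounds the overall speed-up when only a fraction p of a workload is accelerated:

```latex
% Amdahl's law (illustrative addition): overall speed-up S when a fraction p
% of the runtime is accelerated by a factor N and the rest stays serial.
\[
  S(N) = \frac{1}{(1 - p) + \frac{p}{N}}
\]
% Example: with p = 0.95 offloaded to the GPU, the remaining 5% of serial
% CPU work caps the total speed-up at 1 / 0.05 = 20x, however fast the GPU
% becomes. Hence the emphasis on a fast, efficient CPU alongside the GPU.
```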
MORE GENERATIVE AI WITH HBM3 AND HBM3E GPU MEMORY
Generative AI is memory and compute intensive. The NVIDIA GB200 Superchip uses 380 GB of HBM memory, providing more than 4.5 times the GPU memory bandwidth of the NVIDIA H100 Tensor Core GPU. Grace Blackwell's high-bandwidth memory is combined with CPU memory via NVLink-C2C to provide nearly 800 GB of fast-access memory for the GPU. This delivers the memory capacity and bandwidth required for the world's most complex generative AI and accelerated computing workloads.
TEST DRIVE THE NVIDIA Grace Hopper Superchip
Experience a ground-breaking CPU/GPU combination in perfect symbiosis.
Ideal for AI and HPC applications on a large scale.
ENQUIRE ABOUT AN NVIDIA GRACE HOPPER SUPERCHIP REMOTE TEST
Thank you for your interest in a test drive or a demo at sysGen.
Do you require further information, a personal consultation or a quotation? Please send us an e-mail using our contact form! We look forward to hearing from you and will get in touch with you as soon as possible.