NVIDIA GRACE™ CPU SETS NEW STANDARDS FOR PERFORMANCE AND EFFICIENCY

    NVIDIA GRACE HOPPER SUPERCHIP REMOTE TEST

    Designed for the performance and efficiency required for modern AI data centres

THE NVIDIA GRACE™ CPU

CUSTOMISED PERFORMANCE AND EFFICIENCY FOR MODERN AI DATA CENTRES

The ever-increasing complexity of AI models and their requirements has greatly increased the importance of accelerated computing and energy efficiency in the data centre. In this context, the NVIDIA Grace™ CPU marks a milestone. As a pioneering Arm® CPU, it sets new standards by delivering unrivalled performance and efficiency without compromise. The flexibility of the Grace CPU is remarkable: it can seamlessly interface with a GPU to optimise accelerated computing, or stand alone as a powerful and efficient CPU.

The versatility of the NVIDIA Grace CPU spans multiple configurations to meet the diverse needs of modern data centres. From high-performance servers to compute-intensive applications, it provides a solid foundation for next-generation data centres. With its ability to adapt to different deployment scenarios, the Grace CPU enables optimal utilisation of resources, helping to increase efficiency and improve performance.

News from the Grace Lineup!

NVIDIA SuperPOD with GB200

The NVIDIA GB200 Grace Blackwell Superchip combines two NVIDIA Blackwell Tensor Core GPUs with a Grace CPU and scales up to the GB200 NVL72, a massive 72-GPU system connected via NVIDIA® NVLink®, delivering 30x faster real-time inference for large language models.

NVIDIA GRACE HOPPER SUPERCHIP

The NVIDIA Grace Hopper™ Superchip combines the Grace and Hopper architectures using NVIDIA® NVLink®-C2C to provide a coherent CPU and GPU memory model for accelerated AI and high-performance computing (HPC) applications.

Discover Grace reference designs for modern data centre workloads!

The complexity and size of AI models is increasing rapidly. They are enhancing deep recommender systems with tens of terabytes of data, improving conversational AI with hundreds of billions of parameters and enabling new scientific discoveries. Scaling these massive models requires new architectures with fast access to a large memory pool and tight coupling of CPU and GPU. The NVIDIA Grace™ CPU provides the high performance, energy efficiency and high-bandwidth connectivity that can be used in different configurations for different data centre requirements.

System designs for digital twins, artificial intelligence, and high-performance computing

NVIDIA OVX™

for digital twins and NVIDIA Omniverse™

NVIDIA Grace CPU Superchip
NVIDIA GPUs
NVIDIA BlueField®-3

NVIDIA HGX™

for HPC

NVIDIA Grace CPU Superchip
NVIDIA BlueField-3
OEM-defined input/output (IO)

NVIDIA HGX

for AI training, inference, and HPC

NVIDIA Grace Hopper Superchip CPU + GPU
NVIDIA BlueField-3
OEM-defined IO / fourth-generation NVLink

Find out more about the latest technical innovations!

ACCELERATING CPU-TO-GPU CONNECTIONS WITH NVLINK-C2C

Solving the biggest AI and HPC problems requires high-capacity, high-bandwidth memory (HBM). Fourth-generation NVIDIA NVLink-C2C provides 900 gigabytes per second (GB/s) of bidirectional bandwidth between the NVIDIA Grace CPU and NVIDIA GPUs. The link provides a unified, cache-coherent address space that combines system memory and HBM GPU memory for simplified programmability. This coherent, high-bandwidth connection between CPU and GPU is the key to accelerating tomorrow's most complex problems.
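
The sketch below is a minimal illustration, not vendor sample code: it assumes a Grace Hopper class system on which NVLink-C2C exposes ordinary host allocations to the GPU through the shared, cache-coherent address space, so a kernel can operate on malloc'ed memory without explicit staging copies.

```cuda
// Minimal sketch (assumption: a coherent Grace Hopper class system where the
// GPU can dereference ordinary host allocations over NVLink-C2C).
// Error handling is omitted for brevity.
#include <cstdio>
#include <cstdlib>

__global__ void scale(double* data, double factor, size_t n) {
    size_t i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;   // reads and writes plain malloc'ed memory
}

int main() {
    const size_t n = 1 << 20;
    // Ordinary system allocation: no cudaMalloc/cudaMemcpy staging is needed
    // when the platform exposes coherent system memory to the GPU.
    double* data = static_cast<double*>(std::malloc(n * sizeof(double)));
    for (size_t i = 0; i < n; ++i) data[i] = 1.0;

    scale<<<(n + 255) / 256, 256>>>(data, 2.0, n);
    cudaDeviceSynchronize();

    std::printf("data[0] = %f\n", data[0]);  // expect 2.0 on a coherent system
    std::free(data);
    return 0;
}
```

On platforms without this coherent CPU-GPU link, the same workload would need explicit device allocations and copies.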

USE HIGH-BANDWIDTH CPU MEMORY WITH LPDDR5X

NVIDIA Grace is the first server CPU to use LPDDR5X memory with server-class reliability through mechanisms such as error-correcting code (ECC), meeting the needs of the data centre while delivering twice the memory bandwidth and up to ten times better energy efficiency than today's server memory. The NVIDIA Grace CPU integrates Arm Neoverse V2 cores into an NVIDIA-designed Scalable Coherency Fabric to deliver high performance in a power-efficient design that eases the work of scientists and researchers.
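
To put the bandwidth claim in context, a rough STREAM-style triad such as the sketch below is one way to gauge the sustained memory bandwidth a CPU and its memory subsystem actually deliver; the array size, loop structure and compiler flags are illustrative assumptions, not a calibrated benchmark.

```cuda
// Rough STREAM-style triad sketch (host code only) to estimate sustained CPU
// memory bandwidth; sizes and flags are illustrative, not a tuned benchmark.
// Build with a compiler recent enough to know the target, e.g.
//   g++ -O3 -fopenmp -mcpu=neoverse-v2 stream_triad.cpp
#include <chrono>
#include <cstdio>
#include <vector>

int main() {
    const size_t n = 1 << 27;            // ~1 GiB per double array
    std::vector<double> a(n), b(n, 1.0), c(n, 2.0);
    const double scalar = 3.0;

    auto t0 = std::chrono::steady_clock::now();
    #pragma omp parallel for
    for (size_t i = 0; i < n; ++i)
        a[i] = b[i] + scalar * c[i];     // triad: 2 loads + 1 store per element
    auto t1 = std::chrono::steady_clock::now();

    double seconds = std::chrono::duration<double>(t1 - t0).count();
    double bytes = 3.0 * n * sizeof(double);
    std::printf("triad bandwidth: %.1f GB/s\n", bytes / seconds / 1e9);
    return 0;
}
```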

MORE PERFORMANCE AND EFFICIENCY WITH ARM NEOVERSE V2 CORES

Even though the parallel computing capabilities of GPUs continue to advance, workloads can still be limited by serial tasks running on the CPU. A fast and efficient CPU is a critical component of the system design to enable optimal workload acceleration. The NVIDIA Grace CPU integrates Arm Neoverse V2 cores to deliver high performance in a low-power system, making work easier for scientists and researchers.

MORE GENERATIVE AI WITH HBM3 AND HBM3E GPU MEMORY

Generative AI is memory- and compute-intensive. The NVIDIA GB200 Superchip uses 380 GB of HBM memory, providing more than 4.5 times the GPU memory bandwidth of the NVIDIA H100 Tensor Core GPU. Grace Blackwell's high-bandwidth GPU memory is combined with CPU memory via NVLink-C2C to give the GPU nearly 800 GB of fast-access memory. This delivers the memory capacity and bandwidth required for the world's most complex generative AI and accelerated computing workloads.
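
As a hedged sketch of what the combined memory pool can mean in practice, the example below makes a single managed allocation larger than typical GPU HBM capacity and touches it from a kernel; on a Grace-based system, pages that do not fit in HBM can be served from CPU LPDDR5X memory over NVLink-C2C. The 200 GiB size is an illustrative assumption and requires a machine with enough combined CPU and GPU memory.

```cuda
// Hedged sketch: one managed allocation larger than typical GPU HBM capacity.
// On Grace-based systems the NVLink-C2C link lets the GPU reach pages held in
// CPU LPDDR5X memory, so a single allocation can span both memory pools.
// The 200 GiB size below is illustrative only.
#include <cstdio>

__global__ void init(float* buf, size_t n) {
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n) buf[i] = 1.0f;
}

int main() {
    const size_t bytes = 200ull << 30;            // ~200 GiB, illustrative
    const size_t n = bytes / sizeof(float);

    float* buf = nullptr;
    if (cudaMallocManaged(&buf, bytes) != cudaSuccess) {
        std::printf("allocation failed; reduce the size on smaller systems\n");
        return 1;
    }

    // The kernel touches the whole allocation; pages that cannot stay in HBM
    // are backed by CPU memory reachable over NVLink-C2C.
    init<<<(n + 255) / 256, 256>>>(buf, n);
    cudaDeviceSynchronize();

    std::printf("first element: %f\n", buf[0]);
    cudaFree(buf);
    return 0;
}
```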

TEST DRIVE: NVIDIA Grace Hopper Superchip

Experience a ground-breaking CPU/GPU combination in perfect symbiosis.
Ideal for large-scale AI and HPC applications.

Further information

Get more information and discover how this breakthrough CPU/GPU combination can optimise your AI and HPC applications. Click on the buttons for more details.

Enquiry: NVIDIA GRACE HOPPER SUPERCHIP REMOTE TEST

Thank you for your interest in a test drive or a demo at sysGen.

Do you require further information, a personal consultation or a quotation? Please send us an e-mail using our contact form! We look forward to hearing from you and will get in touch with you as soon as possible.
