NVIDIA DGX SYSTEMS ARE SPECIFICALLY DESIGNED FOR LEADING AI RESEARCH

Companies today need an end-to-end strategy for AI innovation to reduce time to insight and unlock new business opportunities. To stay ahead of the competition, they also need to build a streamlined AI development workflow that supports rapid prototyping, frequent runs and continuous feedback, as well as a robust infrastructure that can scale in an enterprise production environment.

NVIDIA DGX™ systems are purpose-built for the demands of enterprise AI and data science, offering the fastest start to AI development, effortless productivity and revolutionary performance - for insights in hours, not months.

AI-savvy practitioners, or DGXperts, are available with every DGX system. With their extensive track record of field-proven implementations, they provide prescriptive planning, deployment and optimisation expertise to accelerate your AI transformation.

OPTIMISED SOFTWARE STACK

Optimised AI software, including a base operating system tuned for AI, simplifies deployment and allows you to become productive in hours rather than months.

UNSURPASSED AI LEADERSHIP

DGX is the core building block of a number of supercomputers in the TOP500 and has been adopted by many leading companies.

SCALABLE AI CLUSTERS

The architecture is proven for scalability across multiple nodes and was developed with industry leaders in storage, compute and networking.

ACCESS TO AI EXPERTISE

The NVIDIA DGXperts are a global team of more than 14,000 AI-savvy experts who have built a wealth of experience over the past decade to help you maximise the value of your DGX investment.

THE WORLD'S FIRST PORTFOLIO OF PURPOSE-BUILT DEEP LEARNING SYSTEMS

Inspired by the demands of deep learning and analytics, NVIDIA DGX™ systems are built on NVIDIA's revolutionary Ampere GPU architecture. Combined with innovative GPU-optimised software and simplified management, these fully integrated solutions deliver breakthrough performance and results.

NVIDIA DGX systems are designed to give data scientists the most powerful tools for AI research - from their desktops to the data centre and the cloud.

EXPERIMENT FASTER, TRAIN LARGER MODELS AND GAIN USEFUL INSIGHTS - FROM DAY ONE.

DGX A100 Image

DGX A100

With the fastest I/O architecture, the NVIDIA DGX A100 is the universal system for the entire AI infrastructure, from analytics to training to inference. It sets new standards for compute density, delivering 5 petaFLOPS of AI performance in a single, unified system that does it all.
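
As a rough sanity check on the 5 petaFLOPS figure, the short Python sketch below multiplies the per-GPU peak Tensor Core throughput of the A100 by the eight GPUs in a DGX A100; the 624 TFLOPS figure (FP16/BF16 with structural sparsity) is an assumption taken from NVIDIA's public A100 specifications rather than from this page.

    # Back-of-the-envelope check of the "5 petaFLOPS" AI performance claim.
    # Per-GPU figures are assumptions from NVIDIA's published A100 specs,
    # not from this page.
    gpus_per_system = 8            # DGX A100 ships with eight A100 GPUs
    tflops_fp16_sparse = 624       # FP16/BF16 Tensor Core peak with 2:4 structural sparsity

    system_pflops = gpus_per_system * tflops_fp16_sparse / 1000
    print(f"Peak AI throughput: ~{system_pflops:.1f} PFLOPS")   # ~5.0 PFLOPS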

Data sheet
DGX Station A100 Image

NVIDIA DGX STATION A100

NVIDIA DGX Station A100 brings AI supercomputing to data science teams, providing data centre technology without a data centre or additional IT infrastructure. It is designed for multiple simultaneous users and packs server-grade components into an office-friendly form factor.

Data sheet
DGX Pod Image

NVIDIA DGX POD™

AI IS HERE - IS YOUR DATA CENTRE READY?

The use of AI by businesses is growing exponentially. With the need to deliver better customer experiences, optimise business spend, improve clinical outcomes or expand research and development capabilities, companies are investing in AI infrastructure to gain insights faster. For businesses that need the shortest path to large-scale AI innovation, NVIDIA DGX SuperPOD™ is the out-of-the-box hardware, software and service offering that removes the guesswork from creating and deploying AI infrastructure.

NVIDIA DGX POD
DGX Superpod Image

NVIDIA DGX SUPERPOD™

WORLD-CLASS AI INFRASTRUCTURE

The use of AI by businesses is growing exponentially. With the need to deliver better customer experiences, optimise business spend, improve clinical outcomes or expand research and development capabilities, companies are investing in AI infrastructure to gain insights faster. For companies looking for the shortest path to large-scale AI innovation, NVIDIA DGX SuperPOD™ is the out-of-the-box hardware, software and service offering that removes the guesswork from AI infrastructure creation and deployment.

Data sheet

SEE YOUR WORK REALISED

Whether it's advancing scientific discoveries or shaping tomorrow's scientists, university faculty and researchers are solving some of the biggest challenges facing the world today. NVIDIA DGX systems, the world's first portfolio of purpose-built AI supercomputers, are designed to give scientists the most powerful tools for AI research - tools that span from the desktop to the data centre to the cloud. Powered by the revolutionary NVIDIA Ampere GPU architecture, DGX systems propel research computing to the next wave of breakthroughs.

DGX systems support deep-learning training, inference and accelerated analytics in a pre-optimised, integrated solution, delivering unparalleled performance that enables researchers and students to iterate and innovate faster. In addition, campus IT organisations can deploy them quickly - in as little as two hours instead of months - reducing development time and enabling greater productivity and faster insights.

NVIDIA PRODUCT ANNOUNCEMENT AT SC20, THE SUPERCOMPUTING CONFERENCE
- NVIDIA A100 WITH 80GB HBM2E - 

SC20 - At the SC20 supercomputing conference, NVIDIA unveiled the NVIDIA® A100 80GB GPU - the latest innovation for the NVIDIA HGX™ AI supercomputing platform - with twice the memory of the previous model, providing researchers and engineers with unprecedented speed and performance to unlock the next wave of AI and scientific breakthroughs.

Based on the company's Ampere graphics architecture, the new A100 with HBM2e technology doubles the high-bandwidth memory of the A100 40GB GPU to 80GB and offers memory bandwidth of over 2 terabytes per second. This allows data to be fed quickly into the A100, the world's fastest data centre GPU, so researchers can accelerate their applications even further and take on even larger models and datasets.

The NVIDIA A100 80GB GPU is available in NVIDIA DGX™ A100 and NVIDIA DGX Station™ A100 systems, which were also announced at SC20 and are expected to ship this quarter.

Leading system vendors including Atos, Dell Technologies, Fujitsu, GIGABYTE, Hewlett Packard Enterprise, Inspur, Lenovo, Quanta and Supermicro are expected to offer systems equipped with HGX A100 integrated baseboards in four- or eight-GPU configurations with A100 80GB in the first half of 2021.
Nvidia A100 80GB Image
 Nvidia A100 with 80GB HBM2e (image: Nvidia)

FEATURES OF THE A100 80GB GPU INCLUDE:

Like its predecessor, the GPU can be partitioned into up to seven GPU instances, each with 10GB of memory, using its multi-instance GPU (MIG) technology. This provides secure hardware isolation and maximises GPU utilisation for a variety of smaller workloads (a short sketch of querying this layout follows the list below).
  • Third-generation tensor cores: Compared to the "Volta" generation, they offer up to 20 times the AI throughput with the TF32 format, as well as 2.5x FP64 for HPC, 20x INT8 for AI inference and support for the BF16 data format.
  • Larger, faster HBM2e GPU memory
  • MIG technology: Doubles the memory per isolated instance and offers up to seven MIGs with 10 GB each.
  • Third-generation NVLink and NVSwitch: Provide twice the GPU-to-GPU bandwidth of the previous-generation interconnect technology, accelerating data transfers between GPUs to up to 600 gigabytes per second.
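
As a minimal sketch of how a data science team might confirm this MIG layout from Python, the snippet below lists the MIG instances on the first GPU. It assumes the nvidia-ml-py (pynvml) bindings are installed and that an administrator has already enabled MIG mode and created the instances; the calls mirror the NVML API, and error handling is kept to a minimum.

    # Sketch: enumerate MIG instances on GPU 0 with pynvml (nvidia-ml-py).
    # Assumes MIG mode is enabled and instances have already been created
    # (e.g. seven 1g.10gb slices on an A100 80GB).
    import pynvml

    pynvml.nvmlInit()
    gpu = pynvml.nvmlDeviceGetHandleByIndex(0)

    current_mode, _pending_mode = pynvml.nvmlDeviceGetMigMode(gpu)
    if current_mode == pynvml.NVML_DEVICE_MIG_ENABLE:
        max_mig = pynvml.nvmlDeviceGetMaxMigDeviceCount(gpu)   # up to 7 on A100
        for i in range(max_mig):
            try:
                mig = pynvml.nvmlDeviceGetMigDeviceHandleByIndex(gpu, i)
            except pynvml.NVMLError:
                continue                                        # slot not populated
            mem = pynvml.nvmlDeviceGetMemoryInfo(mig)
            print(f"MIG instance {i}: {mem.total / 1024**3:.0f} GiB")  # ~10 GiB each
    else:
        print("MIG mode is not enabled on this GPU")

    pynvml.nvmlShutdown()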

CHOOSE THE RIGHT NVIDIA DATA CENTRE PRODUCT FOR YOU.

Accelerated computing is becoming more common across all industries and in large production environments. As new computing demands exceed the capabilities of traditional CPU servers, organisations need to optimise their data centres - acceleration is a must. NVIDIA's data centre platform is the world's leading solution for accelerated computing and is used by the largest supercomputing centres and enterprises. It enables breakthrough performance with fewer, more powerful servers, delivering new insights faster while saving money.

With a variety of GPUs, the platform accelerates a wide range of workloads, from AI training and inference to scientific computing and virtual desktop infrastructure (VDI) applications. To achieve optimal performance, it is important to identify the ideal GPU for a given workload. Learn about NVIDIA GPUs and their corresponding workloads in the table below. Find out which GPU will deliver the best results for your business.

NVIDIA ACCELERATOR SPECIFICATION COMPARISON

PRODUCT COMPARISON

The only difference between the 40GB and 80GB versions of the A100 is the memory capacity and memory bandwidth. Both models come with a mostly enabled GA100 GPU with 108 active SMs and a boost clock of 1.41 GHz. Likewise, the TDPs remain unchanged between the two models. On paper, then, there is no difference in pure compute throughput between the two accelerators.

Instead, the improvements are in memory capacity as well as increased memory bandwidth. NVIDIA equipped the original 40GB A100 variant with six 8GB stacks of HBM2 memory, one of which was disabled for yield reasons. This gave the original A100 40GB of memory and a memory bandwidth of just under 1.6TB/second.

In the newer A100 80GB, NVIDIA retains the same configuration with 5 of the 6 memory stacks enabled, but the memory itself has been replaced with newer HBM2E memory. HBM2E is the informal name for the latest update to the HBM2 memory standard, which defined a new maximum memory speed of 3.2Gbps/pin in February 2020. Alongside this frequency improvement, advances in manufacturing have allowed memory manufacturers to double the capacity of each die from 1GB to 2GB. The net result is that HBM2E offers both greater capacity and greater bandwidth, two things NVIDIA is taking advantage of here.
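
A short worked calculation in Python makes the capacity and bandwidth comparison concrete. The 1,024-bit width per HBM2/HBM2E stack and the roughly 2.4Gbps/pin speed assumed for the 40GB model are taken from publicly available A100 specifications rather than from this article, so the figures below are approximate.

    # Worked example: memory capacity and bandwidth of the two A100 variants.
    # Per-stack width and pin speeds are assumptions from public HBM2/HBM2E specs;
    # both variants have 5 of their 6 stacks enabled.
    active_stacks = 5
    bus_width_bits = active_stacks * 1024            # 5,120-bit effective memory bus

    def bandwidth_gb_s(pin_speed_gbps):
        """Aggregate memory bandwidth in GB/s for a given per-pin data rate."""
        return bus_width_bits * pin_speed_gbps / 8

    # A100 40GB: 8GB HBM2 stacks (eight 1GB dies) at ~2.43 Gbps/pin
    print(active_stacks * 8, "GB,", round(bandwidth_gb_s(2.43)), "GB/s")   # 40 GB, ~1555 GB/s
    # A100 80GB: 16GB HBM2E stacks (eight 2GB dies) at ~3.2 Gbps/pin
    print(active_stacks * 16, "GB,", round(bandwidth_gb_s(3.2)), "GB/s")   # 80 GB, ~2048 GB/s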

NVIDIA DGX SUPERPOD
The solution for businesses

The fastest path to large-scale AI innovation

Nvidia DGX Superpod Image

FIRST-CLASS AI INFRASTRUCTURE

The use of AI in businesses is growing exponentially. With the need to deliver better customer experiences, optimise business spend, improve clinical outcomes or expand research and development capabilities, companies are investing in AI infrastructure to gain insights faster. For companies that need the shortest path to large-scale AI innovation, NVIDIA DGX SuperPOD™ is the out-of-the-box hardware, software and service offering that removes the guesswork from creating and deploying AI infrastructure.

Data sheet

A BUSINESS SOLUTION FOR THE ENTIRE LIFE CYCLE

The NVIDIA DGX SuperPOD enterprise solution incorporates best practices and expertise from the world's largest AI implementations to solve the most challenging enterprise AI problems. For enterprises that need a reliable, out-of-the-box approach to large-scale AI innovation, we've leveraged our industry-leading reference architecture and integrated it into a comprehensive solution and service offering. The NVIDIA DGX SuperPOD enterprise solution enables any organisation in need of world-class infrastructure to achieve industry-proven results in weeks, not months. It also includes a professional implementation that integrates intelligently with your business, so your team can deliver results faster.

Nvidia Software Stack Image

DELIVERING SUPERCOMPUTING SOLUTIONS IN THE SHORTEST TIME POSSIBLE

The DGX SuperPOD enterprise solution is based on the DGX SuperPOD reference architecture - the world's fastest commercially available AI infrastructure, as demonstrated by the MLPerf benchmark suite.
With the DGX SuperPOD Enterprise solution, your organisation can acquire and rapidly deploy its own world-class AI infrastructure with NVIDIA support, so your data science teams are up and running from day one.

INTELLIGENT ADAPTATION AND INTEGRATION

Your data science teams need the right tools, the right platform and the right infrastructure to optimise AI workflows and reduce time to insight. And your IT teams need the right partner to integrate the AI infrastructure into your existing environment.

With the DGX SuperPOD enterprise solution, our professional services team can help you customise our proven infrastructure solution to your environment. This includes flexible deployment options tailored to your business.

BENEFIT FROM A COMPLETE RANGE OF SOLUTIONS

More than just a reference architecture:
Your team needs a shorter path to make AI infrastructure work for your business processes. With the DGX SuperPOD enterprise solution, you benefit from comprehensive data centre planning services and infrastructure deployment expertise. From dimensioning, installation and training to ongoing optimisation and beyond, your deployment is faster with NVIDIA support.