NVIDIA DGX Station A100
  • AI workgroup server with 2.5 petaFLOPS of performance that your entire team can use without restriction for training, inference, and data analysis.

  • server-grade yet plug-and-go, requiring no data center power or cooling.

  • world-class AI platform with no complicated installation and no IT help required.

  • the world's only workstation-style system with four fully interconnected NVIDIA A100 Tensor Core GPUs and up to 320 gigabytes (GB) of GPU memory.

  • provides a fast path to AI transformation, backed by NVIDIA's expertise and experience.

Overview

The NVIDIA DGX Station A100 brings AI supercomputing to data science teams, providing data center technology without a data center or additional IT infrastructure. The DGX Station A100 is designed for users working collaboratively and simultaneously on compute-intensive AI applications. It is the only system with four fully interconnected and multi-instance GPU (MIG)-enabled NVIDIA A100 Tensor Core GPUs with up to 320 GB of total GPU memory. The DGX Station A100 uses server-grade components in an office-friendly form factor that plugs into a standard electrical outlet.

AI SUPERCOMPUTING FOR DATA SCIENCE TEAMS

The DGX Station A100 can run training, inference, and analysis workloads in parallel and, with MIG technology, can provide up to 28 separate GPU devices to individual users and jobs. It also shares the same fully optimized NVIDIA DGX™ software stack as all DGX systems, delivering maximum performance and full interoperability with DGX-based infrastructure, from single systems to NVIDIA DGX POD™ and NVIDIA DGX SuperPOD™, which makes the DGX Station A100 an ideal platform for teams in organizations of all sizes.
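
To make the arithmetic behind those 28 devices concrete (each A100 can be partitioned into up to seven MIG instances, and 4 x 7 = 28), the following Python sketch enumerates the physical GPUs and any MIG devices they expose. It is illustrative only and assumes the nvidia-ml-py (pynvml) bindings are installed and that an administrator has already enabled MIG mode; it is not part of the DGX software stack.

    # Sketch: list physical A100 GPUs and the MIG devices carved out of them.
    # Assumes the nvidia-ml-py (pynvml) package is installed and MIG mode has
    # already been enabled by an administrator; adjust to your environment.
    import pynvml

    pynvml.nvmlInit()
    try:
        total_mig = 0
        for i in range(pynvml.nvmlDeviceGetCount()):       # 4 physical GPUs on DGX Station A100
            gpu = pynvml.nvmlDeviceGetHandleByIndex(i)
            mem = pynvml.nvmlDeviceGetMemoryInfo(gpu)
            print(f"GPU {i}: {pynvml.nvmlDeviceGetName(gpu)}, {mem.total / 1024**3:.0f} GiB")
            try:
                max_mig = pynvml.nvmlDeviceGetMaxMigDeviceCount(gpu)   # up to 7 per A100
            except pynvml.NVMLError:
                max_mig = 0                                 # MIG unsupported or disabled
            for j in range(max_mig):
                try:
                    mig = pynvml.nvmlDeviceGetMigDeviceHandleByIndex(gpu, j)
                except pynvml.NVMLError:
                    continue                                # MIG slot not populated
                total_mig += 1
                print(f"  MIG device {j}: {pynvml.nvmlDeviceGetName(mig)}")
        print(f"MIG devices visible: {total_mig} (at most 4 x 7 = 28)")
    finally:
        pynvml.nvmlShutdown()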

DATA CENTER PERFORMANCE WITHOUT THE DATA CENTER

The NVIDIA DGX Station A100 provides a datacenter-class AI server in a workstation form factor suitable for use in a standard office environment without dedicated power and cooling. The design includes four extremely powerful NVIDIA A100 Tensor Core GPUs, a contemporary server-grade CPU, exceedingly fast NVMe storage, and state-of-the-art PCIe Gen4 buses. The DGX Station A100 also includes the same Baseboard Management Controller (BMC) as the NVIDIA DGX A100, allowing system administrators to perform all necessary tasks via a remote connection.
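
As a hedged illustration of that remote administration path, the Python sketch below polls the BMC over the standard Redfish REST API for basic system status; the BMC address, credentials, and the generic /redfish/v1/Systems endpoint are assumptions for the example rather than DGX-specific values.

    # Sketch: query a DGX Station A100 BMC out-of-band using the standard
    # Redfish REST API. The address and credentials are placeholders, and the
    # /redfish/v1/Systems path is the generic Redfish systems collection, not a
    # DGX-specific endpoint.
    import requests

    BMC = "https://192.0.2.10"                     # hypothetical BMC management IP
    session = requests.Session()
    session.auth = ("admin", "example-password")   # placeholder credentials
    session.verify = False                         # BMCs commonly use self-signed certificates

    systems = session.get(f"{BMC}/redfish/v1/Systems", timeout=10).json()
    for member in systems.get("Members", []):
        node = session.get(f"{BMC}{member['@odata.id']}", timeout=10).json()
        print(node.get("Model"), node.get("PowerState"),
              node.get("Status", {}).get("Health"))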

AN AI DEVICE YOU CAN PLACE ANYWHERE

The NVIDIA DGX Station A100 is designed for today's agile data science teams working in corporate offices, labs, research facilities, or from home. While deploying large-scale AI infrastructure requires significant IT investment and data centers with industrial-grade power and cooling, the DGX Station A100 simply plugs into a standard electrical outlet wherever your team works. Thanks to its innovative refrigerant-based cooling, the workstation stays comfortably cool. With a simple workstation setup, you can have a world-class AI platform up and running in minutes.

SPECIFICATION

Components
DGX Station A100 - 320GB
GPU
4x NVIDIA A100 80 GB GPUs
GPU Memory
320GB total
Performance
2.5 petaFLOPS AI
5 petaOPS INT8



System Power Usage
1.5 kW at 100-120 Vac
CPU
Single AMD 7742, 64 cores, 2.25 GHz (base)-3.4 GHz (max boost)
Storage
OS: 1x 1.92 TB NVME drive
Internal storage: 7.68 TB U.2 NVME drive

DGX Display Adapter
4 GB GPU memory, 4x Mini DisplayPort
Network
Dual-port 10Gbase-T Ethernet LAN
Single-port 1Gbase-T Ethernet BMC management port



System Acoustics
<37 dB
System Weight
91.0 lbs (43.1 kg)
System Dimensions
Height: 25.1 in (639 mm)
Width: 10.1 in (256 mm)
Length: 20.4 in (518 mm)

Packaged System Weight
127.7 lbs (57.93 kg)
Operating Temperature Range
5-35 °C (41-95 °F)
Software
Ubuntu Linux OS