The next AI milestone is ready

Working closely together, NVIDIA and Supermicro offer the most powerful and efficient NVIDIA-Certified Systems. These systems are designed for everything from small enterprise deployments to large AI training clusters and utilize the new NVIDIA H100 Tensor Core GPUs. With up to 9x the training performance of the previous generation, training times can be reduced from a week to as little as 20 hours. Supermicro systems with the new H100 PCI-E and HGX H100 GPUs, as well as the newly announced L40 GPU, offer PCI-E Gen 5 connectivity, fourth-generation NVLink and NVLink Network for scale-out, CNX cards, GPUDirect RDMA and storage with NVIDIA Magnum IO, and NVIDIA AI Enterprise software.
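Taken at face value, the quoted numbers are easy to sanity-check: a one-week run is 168 hours, and dividing by the claimed 9x speedup lands just under the 20-hour figure. A quick back-of-the-envelope sketch (the helper function here is ours for illustration, not an NVIDIA tool):

```python
# Sanity check of the claimed training-time reduction.
# The 9x figure is the quoted H100-vs-previous-generation AI training
# speedup; the one-week baseline is the example used in the text above.

def speedup_time(baseline_hours: float, speedup: float) -> float:
    """Return the new wall-clock time after applying a speedup factor."""
    return baseline_hours / speedup

week_hours = 7 * 24  # 168 hours
h100_hours = speedup_time(week_hours, 9)
print(f"{h100_hours:.1f} hours")  # 18.7 hours, i.e. "as little as 20 hours"
```

The real reduction for any given model depends on how well the workload scales, so this is an upper bound on the benefit, not a guarantee.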

Supermicro has developed new servers specifically for the NVIDIA H100 Tensor Core GPUs, ranging from an 8U rackmount system for high-performance computing (HPC) and other power-hungry applications to a tower-format workstation. These servers are equipped with NVIDIA H100 PCI-E or H100 SXM GPUs, and select current-generation systems are also certified for the NVIDIA H100 GPUs, including the SYS-420GP-TNR GPU server and the SYS-740GP-TNRT workstation.

Production AI with NVIDIA H100 and NVIDIA AI Enterprise

Develop and deploy enterprise AI with unmatched performance, security, and scalability

NVIDIA AI Enterprise, a cloud-native, end-to-end AI software suite optimized specifically for enterprise AI use, is now included with these systems. The combination of AI and data-analytics software with a tuned GPU is designed to deliver greater performance, security, and scalability in AI development. The H100 package for mainstream servers includes a five-year subscription with enterprise support for the NVIDIA AI Enterprise software suite, simplifying the adoption of high-performance AI and giving enterprises access to the AI frameworks and tools they need to build H100-accelerated AI workflows such as AI chatbots, recommendation engines, and vision AI.

The New Supermicro H100 Systems

Next-Gen 8U Universal GPU System
(Coming soon)

Suitable for today's largest AI training models and HPC, with superior thermal capacity, reduced acoustics, more I/O, and large memory capacity
  • GPU: NVIDIA HGX H100 8-GPU (Hopper)
  • GPU Feature Set: With 80 billion transistors, the H100 is the world's most advanced chip, delivering up to 9x faster performance for AI training.
  • CPU: Dual Processors
  • Memory: ECC DDR5 up to 4800MT/s
  • Drives: Up to 24 hot-swap NVMe U.2
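For context on the ECC DDR5-4800 memory spec shared across these systems, peak theoretical bandwidth follows directly from the transfer rate: each 64-bit channel moves 8 bytes per transfer. A rough sketch (the per-socket channel count below is an illustrative assumption, not a Supermicro specification):

```python
# Rough peak-bandwidth arithmetic for the DDR5-4800 spec listed above.
# DDR5 moves 8 bytes per transfer on a 64-bit channel; the channel count
# used below is an illustrative assumption, not a Supermicro spec.

def ddr5_peak_gbps(mt_per_s: int, channels: int) -> float:
    """Peak theoretical bandwidth in GB/s for 64-bit DDR channels."""
    bytes_per_transfer = 8
    return mt_per_s * bytes_per_transfer * channels / 1000

print(ddr5_peak_gbps(4800, 1))  # 38.4 GB/s per channel
print(ddr5_peak_gbps(4800, 8))  # 307.2 GB/s with 8 channels (assumed)
```

Sustained bandwidth in practice is lower than this theoretical peak, but the figure is useful for comparing memory configurations across the systems below.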

Next-Gen 4U/5U Universal GPU System
(Coming soon)

Optimized for AI inference workloads. Modular design for ultimate flexibility.
  • GPU: NVIDIA HGX H100 4-GPU
  • GPU Feature Set: The HGX H100 can speed up AI inference by up to 30x compared to the previous generation.
  • CPU: Dual Processors
  • Memory: ECC DDR5 up to 4800MT/s
  • Drives: Up to 8 hot-swap NVMe U.2 connected to a PCI-E switch, or 10 hot-swap 2.5" SATA/SAS

Next-Gen 4U 10-GPU PCI-E Gen 5 System
(Coming soon)

Flexible design for AI and graphics-intensive workloads, supports up to 10 NVIDIA GPUs.
  • GPU: Up to 10 double-width PCI-E GPUs per node
  • GPU Feature Set: The NVIDIA L40 PCI-E GPUs in this system are ideal for media and graphics workloads.
  • CPU: Dual Processors
  • Memory: ECC DDR5 up to 4800MT/s
  • Drives: 24 hot-swap bays

Next-Gen 4U 4-GPU System
(Coming soon)

Optimized for 3D Metaverse collaboration, data scientists, and content creators. Available in rackmount and workstation form factors.
  • GPU: NVIDIA PCI-E H100 4-GPU
  • GPU Feature Set: NVIDIA H100 GPUs are the world's first accelerators with confidential computing capabilities, strengthening trust in secure collaboration.
  • CPU: Dual Processors
  • Memory: ECC DDR5 up to 4800MT/s
  • Drives: 8 hot-swap 3.5" drive bays, up to 8 NVMe drives, 2x M.2 (SATA or NVMe)
