H100 PCIe Test Drive

Try before you buy:
Supermicro 4U DP 8-GPU Server
with 2x NVIDIA H100 PCIe GPUs

Accelerate HPC and AI Workloads Today

The NVIDIA H100 Tensor Core GPU enables an order-of-magnitude leap for large-scale AI and HPC. With NVIDIA AI Enterprise for streamlined AI development and deployment, and the NVIDIA NVLink Switch System for direct communication between up to 256 GPUs, the H100 accelerates everything from exascale workloads, with a dedicated Transformer Engine for trillion-parameter language models, down to right-sized Multi-Instance GPU (MIG) partitions.

Test drive Supermicro systems with H100 GPUs to experience an end-to-end AI and HPC data center platform.

Quick time-to-value

Turnkey solution: NVIDIA-Certified Systems with NVIDIA AI Enterprise and the NVIDIA H100 integrated for an AI-ready platform.

Boost performance and efficiency

Fastest AI training on large models with the H100's Transformer Engine, plus higher efficiency and security with 2nd-gen MIG and confidential computing.

Scale with ease

Accelerate a wide range of use cases with the flexibility to service the entire AI pipeline on one infrastructure.

Enterprise ready

Streamline enterprise AI with guaranteed support, certified to run on broadly adopted enterprise platforms in the data center.

SPECIFICATION OF THE TESTING ENVIRONMENT

As an NVIDIA Elite Partner, we offer the NVIDIA H100 PCIe GPU Test Drive Programme in cooperation with NVIDIA and Supermicro. Experience in our test centre, free of charge and without obligation, what the H100 can do in a PCIe 5.0 environment with fast storage. Interested in a test? Please fill out our request form.
Model name: A+ Server AS-4125GS-TNRT
CPU: 2x AMD EPYC 9654 (dual socket SP5, AMD EPYC 9004 series), 192 cores / 384 threads in total
TDP: 350W per CPU
RAM: 768GB DDR5-4800
GPUs: 2x NVIDIA H100 Tensor Core GPU, 80GB each
NVMe: 1x 1.9TB M.2 Gen 4.0 NVMe SSD, 2x 3.84TB U.2 Gen 4.0 NVMe SSD
Network: 2x 10GbE RJ45
IPMI: IPMI 2.0 over LAN with KVM-over-LAN support
Form factor: 4U
Fields of application: AI / Deep Learning  |  High Performance Computing
Power connection: 4x 2200W redundant power supplies
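
If you want a quick way to confirm what the test system offers once you are logged in, a short PyTorch script is enough. The following is a minimal sketch, assuming PyTorch with CUDA support is installed on the test system; the matrix size and iteration count are arbitrary illustration values, not a calibrated benchmark.

import time
import torch

# Minimal sanity check for the test-drive node (assumes PyTorch + CUDA):
# list the visible H100 GPUs and time a large bf16 matrix multiplication.
assert torch.cuda.is_available(), "No CUDA-capable GPU visible to PyTorch"

for idx in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(idx)
    print(f"GPU {idx}: {props.name}, {props.total_memory / 2**30:.0f} GiB")

    device = torch.device(f"cuda:{idx}")
    a = torch.randn(8192, 8192, device=device, dtype=torch.bfloat16)
    b = torch.randn(8192, 8192, device=device, dtype=torch.bfloat16)

    torch.cuda.synchronize(device)
    start = time.perf_counter()
    for _ in range(50):
        a @ b
    torch.cuda.synchronize(device)
    elapsed = time.perf_counter() - start

    # 2 * N^3 floating-point operations per matmul
    tflops = 50 * 2 * 8192**3 / elapsed / 1e12
    print(f"  bf16 matmul throughput: ~{tflops:.0f} TFLOP/s")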

NVIDIA H100 PCIe

Unprecedented performance, scalability, and security for mainstream servers
The NVIDIA H100 Tensor Core GPU enables a giant leap in large-scale AI performance, scalability and security for any data centre, and includes the NVIDIA AI Enterprise software suite to optimise AI development and deployment. With the NVIDIA NVLink Switch System for direct communication between up to 256 GPUs, the H100 accelerates exascale workloads with a dedicated Transformer Engine for language models with trillions of parameters. For smaller tasks, the H100 can be divided into right-sized Multi-Instance GPU (MIG) partitions. With Hopper Confidential Computing, this scalable computing power can secure sensitive applications on shared data centre infrastructure. The NVIDIA AI Enterprise software suite shortens development time and simplifies the deployment of AI workloads, making the H100 the most powerful end-to-end AI and HPC data centre platform.
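
As an illustration of the Transformer Engine mentioned above, the following minimal sketch runs an FP8 forward and backward pass through a single linear layer using NVIDIA's Transformer Engine library for PyTorch. It assumes the transformer_engine package is installed (it is included, for example, in NGC PyTorch containers); the layer sizes and recipe settings are placeholder values.

import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

# Illustrative layer and batch sizes (placeholders, all multiples of 16 as FP8 requires).
in_features, out_features, batch = 768, 3072, 2048

# A Transformer Engine linear layer whose matmuls can run in FP8 on the H100.
model = te.Linear(in_features, out_features, bias=True).cuda()
inp = torch.randn(batch, in_features, device="cuda")

# FP8 recipe with delayed scaling; E4M3 format for the FP8 tensors.
fp8_recipe = recipe.DelayedScaling(margin=0, fp8_format=recipe.Format.E4M3)

# Run the forward pass with FP8 autocasting enabled, then backpropagate.
with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    out = model(inp)

loss = out.sum()
loss.backward()

In practice you would wrap a full model rather than a single layer; the sketch only shows where the fp8_autocast context sits.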
HIGHEST AI AND HPC MAINSTREAM PERFORMANCE
3.2PF FP8 (5X) | 1.6PF FP16 (2.5X) | 800TF TF32 (2.5X) | 48TF FP64 (2.5X)
6X faster Dynamic Programming with DPX Instructions
2TB/s bandwidth, 80GB HBM2e memory

HIGHEST COMPUTE ENERGY EFFICIENCY
Configurable TDP - 150W to 350W
2-slot FHFL mainstream form factor

HIGHEST UTILIZATION EFFICIENCY AND SECURITY
7 fully isolated & secured instances, guaranteed QoS
2nd Gen MIG | Confidential Computing

HIGHEST PERFORMING SERVER CONNECTIVITY
128GB/s PCIe Gen5
600GB/s GPU-to-GPU connectivity (5X PCIe Gen5)
Up to 2 GPUs with NVLink Bridge
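If you want to see the GPU-to-GPU path for yourself during a test drive, a rough sketch like the following checks whether the two H100 cards can access each other directly and measures device-to-device copy bandwidth. It assumes PyTorch with CUDA on the test system; the buffer size and iteration count are arbitrary, and the achievable rate depends on whether an NVLink bridge is installed.

import time
import torch

# Rough check of the GPU-to-GPU path between the two H100 PCIe cards
# (e.g. via an NVLink bridge, if installed). Assumes PyTorch with CUDA.
assert torch.cuda.device_count() >= 2, "expected at least two GPUs"

print("Peer-to-peer access GPU0 -> GPU1:", torch.cuda.can_device_access_peer(0, 1))

# 1 GiB fp16 buffer on each GPU (arbitrary size for illustration).
src = torch.empty(512 * 1024 * 1024, dtype=torch.float16, device="cuda:0")
dst = torch.empty_like(src, device="cuda:1")

torch.cuda.synchronize(0)
torch.cuda.synchronize(1)
start = time.perf_counter()
for _ in range(10):
    dst.copy_(src, non_blocking=True)
torch.cuda.synchronize(0)
torch.cuda.synchronize(1)
elapsed = time.perf_counter() - start

gbytes = 10 * src.element_size() * src.numel() / 1e9
print(f"GPU0 -> GPU1 copy bandwidth: ~{gbytes / elapsed:.0f} GB/s")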

Try Before You Buy

Test drive Supermicro systems with H100 GPUs to experience unprecedented performance, scalability and security for every data center.
Secure your test drive via remote access