Acceleration of HPC and AI workloads
The NVIDIA H100 Tensor Core GPU delivers an order-of-magnitude leap for AI and HPC at scale. With NVIDIA AI Enterprise for streamlined AI development and deployment, and the NVIDIA NVLink Switch System for direct communication between up to 256 GPUs, the H100 accelerates everything from exascale workloads, using a dedicated Transformer Engine for language models with trillions of parameters, down to right-sized Multi-Instance GPU (MIG) partitions.
Try Supermicro systems with H100 GPUs and experience an end-to-end platform for AI and HPC data centres!

Try before you buy!
Test Supermicro systems with H100 GPUs and experience unprecedented performance, scalability and security for any data centre.
Specification of the test environment
As an Elite Partner, in cooperation with NVIDIA and Supermicro, we offer the NVIDIA H100 PCIe GPU Test Drive Programme. Experience, free of charge and without obligation, what the H100 can do in a PCIe 5.0 environment with fast memory in our test centre. Interested in a test?
Data

| Model designation | A+ Server AS-4125GS-TNRT |
| CPU | 2x AMD EPYC 9654 (EPYC 9004 series), dual socket SP5, 192 cores / 384 threads in total |
| TDP | 350 W per CPU |
| RAM | 768 GB DDR5-4800 |
| GPUs | 2x NVIDIA H100 Tensor Core GPU, 80 GB each |
| NVMe | 1x 1.9 TB M.2 Gen 4 NVMe SSD, 2x 3.84 TB U.2 Gen 4 NVMe SSD |
| Network | 2x 10GbE RJ45 |
| IPMI | IPMI 2.0 over LAN with KVM-over-LAN support |
| Form factor | 4U |
| Application areas | AI / deep learning, high-performance computing |
| Power supply | 4x 2200 W redundant power supplies |

NVIDIA H100 PCIE
Unrivalled performance, scalability and security
Security for mainstream servers
The NVIDIA H100 Tensor Core GPU enables a giant leap in large-scale AI performance, scalability and security for any data centre, and includes the NVIDIA AI Enterprise software suite to streamline AI development and deployment. With the NVIDIA NVLink Switch System for direct communication between up to 256 GPUs, the H100 accelerates workloads at exascale with a dedicated Transformer Engine for language models with trillions of parameters. For smaller tasks, the H100 can be partitioned into right-sized Multi-Instance GPU (MIG) partitions. With Hopper Confidential Computing, this scalable computing power can secure sensitive applications in a shared data centre infrastructure. The NVIDIA AI Enterprise software suite shortens development time and simplifies deployment of AI workloads, making the H100 the most powerful end-to-end AI and HPC data centre platform.
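The MIG partitioning described above is driven through standard NVIDIA tooling (`nvidia-smi`). As a rough sketch, assuming root access, a recent driver, and an idle H100 in the system; the `1g.10gb` profile name is one of several the driver exposes and is used here only for illustration:

```shell
# Enable MIG mode on GPU 0 (the GPU must be idle; nvidia-smi resets it automatically)
sudo nvidia-smi -i 0 -mig 1

# List the GPU instance profiles the driver offers on this card
nvidia-smi mig -lgip

# Carve GPU 0 into seven 1g.10gb GPU instances and create the matching
# compute instances in one step (-C)
sudo nvidia-smi mig -i 0 \
  -cgi 1g.10gb,1g.10gb,1g.10gb,1g.10gb,1g.10gb,1g.10gb,1g.10gb -C

# Verify: each MIG device now appears with its own UUID
nvidia-smi -L
```

Each resulting MIG device can then be assigned to a separate container or user with guaranteed quality of service, which is what the figure of seven fully isolated instances in the feature summary refers to.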

HIGHEST AI AND HPC MAINSTREAM PERFORMANCE
3.2PF FP8 (5X) | 1.6PF FP16 (2.5X) | 800TF TF32 (2.5X) | 48TF FP64 (2.5X)
6X faster Dynamic Programming with DPX Instructions
2 TB/s memory bandwidth, 80 GB HBM2e memory
HIGHEST COMPUTE ENERGY EFFICIENCY
Configurable TDP: 150 W to 350 W
2 Slot FHFL mainstream form factor
HIGHEST UTILISATION EFFICIENCY AND SECURITY
7 fully isolated & secured instances with guaranteed QoS
2nd Gen MIG | Confidential Computing
HIGHEST PERFORMING SERVER CONNECTIVITY
128 GB/s PCIe Gen5
600 GB/s GPU-to-GPU connectivity (5X PCIe Gen5)
Up to 2 GPUs with NVLink Bridge
Enquiry: NVIDIA H100 GPU Test Drive
Thank you for your interest in a free NVIDIA H100 PCIe GPU Test Drive.
Do you need further information, a personal consultation or an offer? Please send us an email using our contact form! We look forward to hearing from you and will get back to you as soon as possible.