
AI Server products


4U GPU server with NVIDIA HGX H100 4-GPU SXM5
SYS-421GU-TNXR
2x 10GbE RJ-45
1x Dedicated IPMI Management

8U GPU server with NVIDIA HGX H100 8-GPU SXM5
SYS-8125GS-TNHR
1x Dedicated IPMI Management

8U GPU server with NVIDIA HGX H200 8-GPU SXM5
SYS-821GE-TNHR
1x Dedicated IPMI Management

19" GPU rack system with 20x NVIDIA GH200
SYS-SRS-8125GS-DCLC-02
1x 48-port 1/25G Management-Switch
Incl. NDR InfiniBand cabling
Incl. 10Gb/1Gb LAN cabling

NVIDIA DGX B200 (1.4 TB): The Universal System for AI Infrastructure

NVIDIA DGX Spark: A Grace Blackwell AI supercomputer on your desk

NVIDIA DGX B300 (2.3 TB): The AI factory foundation for AI reasoning

10U GPU server with NVIDIA HGX B200 8-GPU SXM
SYS-A22GA-NBR
Total GPU memory of 1,440 GB HBM3e
2x 100GbE QSFP56
2x 10GbE RJ-45
1x Dedicated IPMI Management

10U GPU server with NVIDIA HGX B200 8-GPU SXM
SYS-A126GS-TNB
Total GPU memory of 1,440 GB HBM3e
2x 100GbE QSFP56
2x 10GbE RJ-45
1x Dedicated IPMI Management

5U GPU server with 8x NVIDIA H200 NVL (141 GB)
SYS-5126GS-TNRT
1x Dedicated IPMI Management

5U GPU server with 8x NVIDIA RTX PRO 6000 Blackwell Server Edition
SYS-5126GS-TNRT
1x Dedicated IPMI Management

2U GPU server with 2x NVIDIA RTX PRO 6000 Blackwell Server Edition
SYS-212GB-NR
1x Dedicated IPMI Management

2U GPU server with 2x NVIDIA H200 NVL (141 GB)
SYS-212GB-NR
1x Dedicated IPMI Management

3U GPU server with 8x NVIDIA RTX PRO 6000 Blackwell Server Edition
SYS-322GA-NR
1x Dedicated IPMI Management

3U GPU server with 8x NVIDIA H200 NVL (141 GB)
SYS-322GA-NR
1x Dedicated IPMI Management

NVIDIA DGX systems provide specialised infrastructure for AI development and offer the following benefits:
- Integrated high-performance GPUs: NVIDIA H200 and B200 GPUs provide unrivalled computing power for demanding AI tasks.
- Optimised software: NVIDIA AI Enterprise, including Base Command, simplifies the management and scaling of AI applications and increases efficiency.
- Versatility: DGX systems are suitable for various AI workloads, including training, inference and analytics.
- Expandable architecture: The ability to combine multiple DGX systems into a cluster makes it possible to support AI projects of any scale.

Supermicro's HGX-based systems also offer impressive benefits for AI applications:
- Multi-GPU support: optimised for parallel training of large AI models, increasing processing capacity and reducing training times (a data-parallel training sketch follows this list).
- Scalable solutions: From single server configurations to extensive clusters tailored to specific customer needs.
- High energy efficiency: Designed to minimise operating costs while maximising computing power.
- Extensive compatibility: Supports a wide range of operating systems and applications, facilitating integration into existing systems.
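As a concrete illustration of the multi-GPU points in the lists above, here is a minimal data-parallel training sketch using PyTorch DistributedDataParallel. The model, batch size and step count are placeholders, and it assumes the script is launched with torchrun so that one process runs per GPU; it illustrates the general technique rather than the configuration of any specific system listed here.

    # Minimal data-parallel training sketch (placeholder model and data).
    # Assumed launch: torchrun --nproc_per_node=8 train_ddp.py
    import os
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP

    def main():
        # torchrun sets RANK, LOCAL_RANK and WORLD_SIZE for each process.
        dist.init_process_group(backend="nccl")
        local_rank = int(os.environ["LOCAL_RANK"])
        torch.cuda.set_device(local_rank)

        # Placeholder model and optimiser; replace with a real workload.
        model = torch.nn.Linear(1024, 1024).to(f"cuda:{local_rank}")
        model = DDP(model, device_ids=[local_rank])
        optimiser = torch.optim.AdamW(model.parameters(), lr=1e-4)

        for step in range(100):
            x = torch.randn(32, 1024, device=f"cuda:{local_rank}")  # dummy batch
            loss = model(x).pow(2).mean()
            loss.backward()              # gradients are all-reduced across GPUs
            optimiser.step()
            optimiser.zero_grad()

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()

The same script scales beyond a single server: torchrun also supports multi-node launches, and on clustered DGX or HGX systems the gradient all-reduce then runs over the node-to-node fabric, for example NDR InfiniBand.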
Do you need more? Matching hardware for AI training
Our high-performance storage solutions offer high capacity and speed to efficiently store and retrieve large amounts of data. Perfect for the requirements of big data and machine learning.
Our workstations are purpose-built to fulfil the demanding requirements of AI training and offer outstanding computing power. Perfect for research and development, they support a wide range of AI applications; with state-of-the-art hardware and flexible configuration options, we ensure that your AI projects can be realised successfully.
Our inference hardware offers the necessary computing capacity to execute AI models in real time. Ideal for applications such as autonomous vehicles, image processing and voice control.
- How are NVIDIA DGX systems optimised for AI training?
NVIDIA DGX systems are specially designed for deep learning training. They combine powerful GPUs, such as the NVIDIA H100, H200 and B200, that are built for massively parallel data processing. This makes them particularly efficient for training complex AI models with large amounts of data.
- What are the benefits of Supermicro HGX platforms for AI training?
Supermicro HGX platforms offer customised solutions for AI training with high-density GPU integration and excellent cooling to handle the thermal requirements of powerful GPUs. These systems are designed to minimise latency and maximise computing power for intensive AI training.
- What should be considered when selecting an AI training server?
When selecting an AI training server, the specific requirements of the AI model, such as data volume, model complexity and desired training time, should be taken into account. In addition, factors such as system scalability, GPU performance and the possibility of integration into existing infrastructures are crucial.
- Can DGX and HGX servers be scaled for training tasks?
Yes, both NVIDIA DGX and Supermicro HGX platforms support scalable architectures that allow additional units to be added as AI projects grow. This makes it easier to train larger models or to run multiple training jobs in parallel.
- How does the hardware affect the training speed?
The hardware, in particular the type and number of GPUs as well as the network and storage infrastructure, plays a decisive role in the training speed. High-quality GPUs such as those in DGX and HGX systems can significantly reduce training times by enabling fast data processing and efficient parallelisation.
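To put the last point in rough numbers, the sketch below estimates how training time shrinks as GPUs are added when parallel scaling is not perfectly efficient. The 100-hour single-GPU baseline and the 0.9 per-doubling efficiency are illustrative assumptions, not benchmarks of any system listed above.

    # Illustrative scaling estimate; all figures are placeholder assumptions.
    import math

    def estimated_training_hours(baseline_hours, gpus, efficiency_per_doubling=0.9):
        # Ideal speedup is linear in GPU count; each doubling loses a little
        # to communication and synchronisation overhead (efficiency < 1.0).
        doublings = math.log2(gpus)
        effective_speedup = gpus * (efficiency_per_doubling ** doublings)
        return baseline_hours / effective_speedup

    for n in (1, 2, 4, 8):
        print(f"{n} GPU(s): ~{estimated_training_hours(100.0, n):.1f} hours")

In practice the efficiency term depends heavily on the interconnect and on the storage throughput feeding the GPUs, which is why the systems above pair their GPUs with high-bandwidth fabrics such as NVLink and InfiniBand.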