
NVIDIA DGX H200 (1,128GB GPU memory): The Universal System for AI Infrastructure
Storage: 8x 3.84TB NVMe U.2
Networking: 2x dual-port NVIDIA ConnectX-7 VPI
1x 400Gb/s InfiniBand/Ethernet
1x 200Gb/s InfiniBand/Ethernet
NVIDIA DGX systems provide a specialised infrastructure for AI development.
Supermicro's HGX platform likewise offers compelling advantages for AI applications.
Our high-performance storage solutions offer high capacity and speed to efficiently store and retrieve large amounts of data. Perfect for the requirements of big data and machine learning.
Our workstations are specially developed to meet the demanding requirements of AI training and deliver outstanding computing power. Perfect for research and development, they support a wide range of AI applications. With state-of-the-art hardware and flexible configuration options, we ensure that your AI projects can be realised successfully.
Our inference hardware offers the necessary computing capacity to execute AI models in real time. Ideal for applications such as autonomous vehicles, image processing and voice control.
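As a rough illustration of what real-time execution involves on the software side, the following sketch measures GPU inference latency in PyTorch. The model, input size and iteration counts are placeholders chosen for the example, not a benchmark of any particular system:

```python
# Rough GPU inference latency check (assumes PyTorch with CUDA; the model is a placeholder).
import time
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096), torch.nn.ReLU(), torch.nn.Linear(4096, 10)
).cuda().eval()
x = torch.randn(1, 1024, device="cuda")

with torch.inference_mode():
    for _ in range(10):           # warm-up iterations
        model(x)
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(100):
        model(x)
    torch.cuda.synchronize()      # wait for all kernels before stopping the clock
    print(f"mean latency: {(time.perf_counter() - start) / 100 * 1000:.2f} ms")
```

The warm-up loop and the explicit synchronisation are what make the timing meaningful: CUDA kernels launch asynchronously, so the clock must only stop once the GPU has actually finished.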
NVIDIA DGX systems are purpose-built for deep learning training. They integrate powerful GPUs such as the NVIDIA A100, H100 and H200, which are optimised for massively parallel data processing. This makes them particularly efficient for training complex AI models on large amounts of data.
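To illustrate the kind of data-parallel training these GPUs are built for, here is a minimal PyTorch DistributedDataParallel sketch that spreads a training loop across all GPUs of a single node. The model, batch size and optimiser settings are placeholders, not part of any NVIDIA reference workload:

```python
# Minimal single-node data-parallel training sketch (assumes PyTorch with CUDA).
# Launch on one 8-GPU node with:  torchrun --nproc_per_node=8 train.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets LOCAL_RANK for each spawned process (one per GPU).
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    dist.init_process_group(backend="nccl")   # NCCL handles GPU-to-GPU communication

    # Placeholder model and data; replace with your own workload.
    model = torch.nn.Linear(1024, 1024).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(100):
        x = torch.randn(64, 1024, device=f"cuda:{local_rank}")
        loss = model(x).square().mean()
        loss.backward()            # gradients are all-reduced across GPUs automatically
        optimizer.step()
        optimizer.zero_grad()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Launched with torchrun, one process is spawned per GPU; on a DGX-class system the gradient all-reduce traffic is typically carried over NVLink/NVSwitch by NCCL.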
Supermicro HGX platforms offer customised solutions for AI training with high-density GPU integration and excellent cooling to handle the thermal requirements of powerful GPUs. These systems are designed to minimise latency and maximise computing power for intensive AI training.
When selecting an AI training server, the specific requirements of the AI model, such as data volume, model complexity and desired training time, should be taken into account. In addition, factors such as system scalability, GPU performance and the possibility of integration into existing infrastructures are crucial.
Yes, both NVIDIA DGX and Supermicro HGX platforms support scalable architectures that allow additional units to be added to keep pace with the demands of growing AI projects. This makes it easier to train larger models or run several training jobs in parallel.
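As a sketch of how such scaling looks in practice, the single-node example above can be launched across several nodes without changing the Python code; only the launcher invocation differs. The hostname, port and node count below are placeholders:

```python
# Scaling the single-node sketch to two nodes leaves the training code unchanged;
# only the torchrun invocation differs. Hostname and port are placeholders.
#
#   node 0:  torchrun --nnodes=2 --node_rank=0 --nproc_per_node=8 \
#                     --rdzv_backend=c10d --rdzv_endpoint=node0.example:29500 train.py
#   node 1:  torchrun --nnodes=2 --node_rank=1 --nproc_per_node=8 \
#                     --rdzv_backend=c10d --rdzv_endpoint=node0.example:29500 train.py
#
# Inside train.py the process group simply reports the larger world size:
import torch.distributed as dist

dist.init_process_group(backend="nccl")   # reads RANK/WORLD_SIZE set by torchrun
print(f"rank {dist.get_rank()} of {dist.get_world_size()} processes")
dist.destroy_process_group()
```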
The hardware, in particular the type and number of GPUs as well as the network and storage infrastructure, plays a decisive role in training speed. High-performance GPUs such as those in DGX and HGX systems can significantly reduce training times by enabling fast data processing and efficient parallelisation.
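One concrete example of getting more speed out of such GPUs on the software side is automatic mixed precision, which uses the Tensor Cores for float16 arithmetic. The sketch below uses a placeholder model and random data; the actual speed-up depends on the workload:

```python
# Mixed-precision training sketch (assumes PyTorch with a CUDA GPU; model and
# data are placeholders). On Tensor Core GPUs this typically raises throughput
# compared with plain float32 training, though the exact gain varies by model.
import torch

model = torch.nn.Linear(4096, 4096).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()   # rescales the loss to keep fp16 gradients stable

for step in range(100):
    x = torch.randn(256, 4096, device="cuda")
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        loss = model(x).square().mean()
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
    optimizer.zero_grad()
```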