
10U GPU server including NVIDIA HGX B200 8-GPU SXM
SYS-A22GA-NBR
Total GPU memory of 1,440 GB HBM3e
2x 100GbE QSFP56
2x 10GbE RJ-45
1x Dedicated IPMI Management
Massive data sets, huge deep learning models and complex simulations require multiple GPUs with extremely fast connections and a fully accelerated software stack. The NVIDIA HGX™ platform combines powerful GPUs with fast NVLink and InfiniBand interconnects and an optimised software stack from the NVIDIA NGC catalogue. This enables maximum performance for AI training, simulations and data-intensive analyses. Thanks to its end-to-end performance and flexibility, NVIDIA HGX enables researchers and scientists to combine simulations, data analyses and AI to drive scientific progress.
Large AI models in areas such as language processing, generative AI, autonomous driving and robotics require a scalable computing infrastructure. Platforms such as NVIDIA HGX are designed to pool parallel GPU performance efficiently, so that complex models can be trained on large amounts of data in a shorter time frame. Fast connections between the GPUs and direct networking of multiple nodes allow even very large parameter sets to be processed efficiently.
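As a rough illustration of how this pooled GPU performance is used in practice, the sketch below shows a minimal data-parallel training loop with PyTorch's DistributedDataParallel, launched with torchrun across the eight GPUs of a single HGX node. The model, batch size and hyperparameters are placeholders for illustration, not a recommended configuration.

```python
# Minimal sketch: data-parallel training across the 8 GPUs of one HGX node.
# Assumes PyTorch with the NCCL backend; launch with:
#   torchrun --nproc_per_node=8 train_ddp.py
import os
import torch
import torch.nn as nn
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, LOCAL_RANK and WORLD_SIZE for each process
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Toy model and optimiser; a real workload would build a large model here
    model = nn.Sequential(nn.Linear(4096, 4096), nn.ReLU(),
                          nn.Linear(4096, 10)).cuda(local_rank)
    ddp_model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(ddp_model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    for step in range(100):
        # Synthetic batch; gradients are averaged across GPUs via NCCL all-reduce
        x = torch.randn(32, 4096, device=local_rank)
        y = torch.randint(0, 10, (32,), device=local_rank)
        optimizer.zero_grad()
        loss = loss_fn(ddp_model(x), y)
        loss.backward()
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```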
Research institutions and engineering teams are increasingly relying on GPU-accelerated computing power to carry out complex simulations and data-intensive analyses more efficiently. Applications range from computational fluid dynamics and molecular simulation to genome research and drug discovery.
The systems offered by sysGen in co-operation with Supermicro are based on the NVIDIA HGX platform, support flexible configurations and can be scaled for rack operation.
To shorten the time to discovery for scientists, researchers and engineers, more and more HPC workloads are being augmented with machine learning algorithms and GPU-accelerated parallel computing. Many of the world's fastest supercomputing clusters now utilise GPUs and the power of AI.
HPC workloads typically involve data-intensive simulations and analyses with massive data sets and strict precision requirements. GPUs such as NVIDIA's H100/H200 offer exceptional double-precision (FP64) performance, delivering around 60 teraflops per GPU.
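To relate such figures to a concrete system, the following hedged sketch times a dense FP64 matrix multiplication with PyTorch and reports the achieved teraflops. The matrix size and iteration count are arbitrary choices, and the measured value depends on the GPU model, clocks and libraries in use.

```python
# Rough sketch: measuring double-precision (FP64) matmul throughput on one GPU.
# Assumes PyTorch with CUDA; the result is a rough benchmark, not a spec value.
import torch

def fp64_gemm_tflops(n=8192, iters=10):
    a = torch.randn(n, n, dtype=torch.float64, device="cuda")
    b = torch.randn(n, n, dtype=torch.float64, device="cuda")
    torch.cuda.synchronize()

    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(iters):
        c = a @ b
    end.record()
    torch.cuda.synchronize()

    seconds = start.elapsed_time(end) / 1000.0  # elapsed_time returns milliseconds
    flops = 2 * n**3 * iters                    # ~2*n^3 FLOPs per dense matmul
    return flops / seconds / 1e12

if __name__ == "__main__":
    print(f"FP64 GEMM: ~{fp64_gemm_tflops():.1f} TFLOPS")
```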
Supermicro's highly flexible HPC platforms, offered through sysGen, enable high GPU and CPU counts in various dense form factors with rack-scale integration and liquid cooling.
Are you interested in our solutions or do you have further questions? Contact us now and find out more about the most powerful end-to-end platform for AI supercomputing. We will be happy to advise you and find the perfect solution for your requirements.
The NVIDIA HGX platform combines scalable GPU performance with optimised system architecture for data-intensive workloads. Thanks to direct GPU connections (NVLink, NVSwitch) and a customised software stack, complex AI models and simulations can be processed efficiently.
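For readers who want to verify the GPU interconnect on a delivered system, the short sketch below queries the NVLink link state of each GPU via NVML. It assumes the pynvml Python bindings are installed; the same information can also be read with nvidia-smi nvlink --status.

```python
# Hedged sketch: enumerating per-GPU NVLink link state via NVML (pynvml bindings).
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        active = 0
        for link in range(pynvml.NVML_NVLINK_MAX_LINKS):
            try:
                state = pynvml.nvmlDeviceGetNvLinkState(handle, link)
                if state == pynvml.NVML_FEATURE_ENABLED:
                    active += 1
            except pynvml.NVMLError:
                break  # link index not supported on this GPU
        print(f"GPU {i} ({name}): {active} active NVLink links")
finally:
    pynvml.nvmlShutdown()
```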
HGX-based systems are used in areas such as deep learning, engineering simulation, scientific research, genomics and drug development. They are particularly suitable for computationally intensive, parallelised applications with large amounts of data.
The platform integrates advanced NVIDIA GPU architectures, available as NVIDIA HGX H100, NVIDIA HGX H200, NVIDIA HGX B200 and other current models, with direct GPU-to-GPU communication via NVLink and NVSwitch as well as fast network fabrics such as InfiniBand. This is complemented by management tools, optimised drivers and support for frameworks from the NVIDIA NGC catalogue.
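To illustrate how this direct GPU-to-GPU communication is exercised from a framework, the following sketch, assuming PyTorch with the NCCL backend, checks peer access between the GPUs of one node and runs a large all-reduce of the kind used for gradient exchange during training. The tensor size and the script name are illustrative only.

```python
# Small sketch: checking GPU-to-GPU peer access and running an NCCL all-reduce
# across all GPUs of one node. Launch with: torchrun --nproc_per_node=8 check_fabric.py
import os
import torch
import torch.distributed as dist

def main():
    dist.init_process_group(backend="nccl")
    rank = dist.get_rank()
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    if rank == 0:
        n = torch.cuda.device_count()
        # Peer access is a prerequisite for direct GPU-GPU transfers (e.g. over NVLink)
        for i in range(n):
            for j in range(n):
                if i != j and not torch.cuda.can_device_access_peer(i, j):
                    print(f"No peer access between GPU {i} and GPU {j}")

    # A 1 GiB all-reduce exercises the intra-node fabric used for gradient exchange
    t = torch.ones(256 * 1024 * 1024, device=local_rank)  # 256M float32 values = 1 GiB
    dist.all_reduce(t)
    if rank == 0:
        print("all_reduce ok, per-element sum:", t[0].item())

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```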
HGX systems have a modular design and can be integrated into standard racks. sysGen offers scalable solutions - air or liquid cooled - with optional support for GPU clusters, storage connectivity and management software. Integration is tailored to existing IT environments.