GPU ACCELERATOR FOR DATA CENTERS

NVIDIA's data center accelerators, from the Tesla line to the current Ampere generation, are designed to accelerate HPC applications and to deploy artificial intelligence and deep learning workloads.

The main advantages of NVIDIA's cards include specialized Tensor Cores for machine learning workloads and large memory capacities (up to 40 GB per accelerator) protected by ECC. To let the accelerators communicate with each other quickly, NVIDIA connects them over a dedicated high-bandwidth interface, NVLink, which achieves transfer speeds of up to 600 GB/s per GPU. In addition, the NVIDIA DGX A100 adds NVSwitch, which provides a total throughput of up to 4.8 TB/s between its eight NVIDIA Ampere A100 cards.

NVIDIA VIRTUAL GPU NEWS - GTC 2021

GTC 2021 again brought many innovations on both the hardware and the software side: new accelerators such as the A30, A40, A16 and the DGX Station, as well as notable software solutions such as Omniverse. You can read more about this in our blog posts.
GTC21 - What's new, what's changed?

DATA CENTERS ARE ON THE RISE

As businesses around the world become increasingly digitized, connected and agile, the value of eliminating silos in favor of more holistic strategies is now widely recognized and extends to nearly every aspect of the modern enterprise, from supply chains to procurement optimization. Agile strategies enable organizations to re-evaluate how they spend money and time on projects, bringing together interdisciplinary teams to work toward a common goal.

The data center industry is in a constant state of evolution. From the shift to managed services, cloud and colocation, to the constant balancing act between hyperscale and edge, it's a demanding time when data center operators face new challenges on a regular basis. After all, a data center is only efficient if it can keep pace with innovations and changes in the market.
Would you like to learn more about sysGen's colocation, hosting and managed services in our Bremen compute center?
To the sysGen "E-Center"

ONE GRAPHICS PROCESSOR FOR EVERY VIRTUAL WORKLOAD

Growing workloads drive need for specialized accelerators

NVIDIA AI ENTERPRISE

THE ROAD TO A SIMPLIFIED, ACCELERATED DATA CENTER

Today, most enterprise applications in the data center run on a shared resource pool managed by a virtualization or orchestration platform. This strategy maximizes the use of capital infrastructure and enables high operational efficiency. All other applications that cannot leverage this shared pool run in a silo where operational efficiency is fundamentally lower.

The solution to this problem is an enterprise IT infrastructure on which modern and traditional applications can both run optimally on a common pool of resources. This lets IT reduce operational costs by cutting the number of separate computing environments that need to be managed, and lower capital costs by consolidating workloads onto fewer systems. It also prepares companies for a future in which the majority of applications are hardware accelerated.

Certified exclusively on VMware vSphere 7

Enabling AI and data analytics on VMware vSphere

OPTIMISED FOR PERFORMANCE

Achieve near bare-metal performance across multiple nodes to run large, complex training and machine learning workloads.

CERTIFIED FOR VMWARE VSPHERE

Reduce deployment risks with a complete suite of NVIDIA AI software certified for the VMware data center.

NVIDIA ENTERPRISE SUPPORT

Keep mission-critical AI projects on track with access to NVIDIA experts.

NVIDIA Ampere architecture-based GPUs

A100, A30, A40, A10

Deploy & Manage
End-to-end management with real-time insights.

VMs in Multi-Instance GPU (MIG)

Spatial and temporal partitioning, VM isolation, flexible sharing of GPU resources.
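As an illustration of how this partitioning is typically configured on the host, the following is a minimal sketch using NVIDIA's `nvidia-smi` MIG commands; the GPU index and the `1g.5gb` profile (profile ID 19 on a 40 GB A100) are assumptions for illustration, not details taken from this page, and the commands require an A100-class GPU with a recent driver.

```shell
# Enable MIG mode on GPU 0 (assumes a single A100 in the server).
nvidia-smi -i 0 -mig 1

# List the MIG profiles the driver offers on this GPU.
nvidia-smi mig -lgip

# Create seven 1g.5gb GPU instances plus their compute instances (-C).
# Profile ID 19 corresponds to 1g.5gb on a 40 GB A100.
nvidia-smi mig -cgi 19,19,19,19,19,19,19 -C

# Verify the resulting instances.
nvidia-smi mig -lgi
```

Each resulting instance appears as a separate device with its own memory and compute slices, which is what allows several VMs to share one physical GPU in isolation.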

GPUDIRECT COMMUNICATION

Improved data transmission performance.

Unified Virtual Memory (UVM)

Simplified programming, improved performance

CUDA Tools

Develop, optimise, deploy applications

GPU Operator

Simplified GPU management
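On Kubernetes, the GPU Operator is typically installed from NVIDIA's official Helm chart; the following is a minimal sketch, assuming a cluster with `kubectl` and `helm` already configured (the namespace name is illustrative).

```shell
# Add NVIDIA's Helm repository (official chart location).
helm repo add nvidia https://helm.ngc.nvidia.com/nvidia
helm repo update

# Install the GPU Operator; it deploys the driver, container toolkit,
# device plugin and DCGM monitoring as containerized components.
helm install --wait --generate-name \
  -n gpu-operator --create-namespace \
  nvidia/gpu-operator
```

Once the operator's pods are running, GPU nodes are labeled automatically and pods can request GPUs through the standard `nvidia.com/gpu` resource.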

Deployment of a typical AI workflow
Distributed Deep Learning Training - Multi-Node Scaling
Up to 266 times higher AI inference performance compared with CPU-only servers
  • Near bare-metal performance
  • GPU sharing without performance loss

PRODUCT COMPARISON

NVIDIA A100

The biggest generational leap: up to 20 times the performance of Volta.

Ampere-based architecture - 3rd generation Tensor Cores, fast FP64
Multi-instance GPU (MIG) - Up to 7 instances with 5 GB each

- Excellent power/performance at 250 W
- PCIe Gen 4
- 40 GB HBM2

NVIDIA A30

Versatile compute acceleration for mainstream enterprise servers.

Ampere-based architecture - 3rd generation Tensor cores, fast FP64
Multi-instance GPU (MIG) - Up to 4 instances with 6 GB each

- Excellent performance at 165 W
- PCIe Gen 4
- 24 GB HBM2