GPU ACCELERATOR FOR DATA CENTERS
The main advantages of NVIDIA data center cards include specialized Tensor Cores for machine-learning workloads and large memory (up to 40 GB per accelerator) protected by ECC. To let the accelerators communicate with each other quickly, NVIDIA connects them with a dedicated high-throughput interface, NVLink, which reaches transfer speeds of up to 600 GB/s. In addition, the NVIDIA DGX A100 adds NVSwitch, which provides a total throughput of up to 4.8 TB/s between its eight NVIDIA Ampere A100 cards.
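On a running multi-GPU system, the NVLink topology and per-link speeds described above can be inspected with `nvidia-smi` (a sketch; it requires an NVIDIA driver, and the exact output depends on driver version and hardware):

```shell
# Show how the GPUs are interconnected
# (NV# entries in the matrix indicate NVLink connections)
nvidia-smi topo -m

# Show the state and speed of each NVLink link on GPU 0
nvidia-smi nvlink --status -i 0
```

On a DGX A100, the status listing shows twelve links per GPU, whose per-link speeds add up to the aggregate NVLink bandwidth quoted above.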
NVIDIA VIRTUAL GPU NEWS - GTC 2021
DATA CENTERS ARE ON THE RISE
The data center industry is in constant evolution. From the shift to managed services, cloud, and colocation, to the ongoing balancing act between hyperscale and edge, it is a demanding time in which data center operators regularly face new challenges. After all, a data center is only efficient if it keeps pace with innovation and change in the market.
ONE GRAPHICS PROCESSOR FOR EVERY VIRTUAL WORKLOAD
NVIDIA AI ENTERPRISE
THE ROAD TO A SIMPLIFIED, ACCELERATED DATA CENTER
The solution to this problem is an enterprise IT infrastructure in which modern and traditional applications run optimally on a common pool of resources. This lets IT cut operating costs by reducing the number of separate computing environments to manage, and lower capital costs by consolidating workloads onto fewer systems. It also prepares companies for a future in which the majority of applications are hardware accelerated.
Certified exclusively on VMware vSphere 7
Enabling AI and data analytics on VMware vSphere
OPTIMISED FOR PERFORMANCE
Achieve near bare-metal performance across multiple nodes to run large, complex training and machine learning workloads.
CERTIFIED FOR VMWARE VSPHERE
Reduce deployment risks with a complete suite of NVIDIA AI software certified for the VMware data center.
NVIDIA ENTERPRISE SUPPORT
Keep mission-critical AI projects on track with access to NVIDIA experts.
A100, A30, A40, A10
Deploy & Manage
End-to-end management with real-time insights.
VMs in Multi-Instance GPU (MIG)
Spatial and temporal partitioning, VM isolation, flexible sharing of GPU resources.
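Outside the hypervisor, the same spatial partitioning can be driven directly with `nvidia-smi` on a MIG-capable GPU such as the A100 (a sketch assuming administrative rights; available profile names and instance counts vary by card):

```shell
# Enable MIG mode on GPU 0
# (may require stopping GPU clients and resetting the GPU)
nvidia-smi -i 0 -mig 1

# List the GPU instance profiles this card supports
nvidia-smi mig -lgip

# Create two 1g.5gb GPU instances plus their default compute instances
nvidia-smi mig -cgi 1g.5gb,1g.5gb -C

# Verify the resulting MIG devices
nvidia-smi -L
```

Each created instance appears as its own device with isolated memory and compute, which is what allows a VM to be mapped to a single MIG slice.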
GPUDirect communication
Improved data transfer performance.
Unified Virtual Memory (UVM)
Simplified programming, improved performance.
Develop, optimise, deploy applications
Simplified GPU management
- Up to 266× the performance of CPU-only systems
- Near bare-metal performance
- GPU sharing without performance loss
The biggest generational leap yet - up to 20 times the performance of Volta.
Ampere-based architecture - 3rd generation Tensor cores, fast FP64
Multi-instance GPU (MIG) - Up to 7 instances with 5 GB each
- Excellent power/performance at 250 W
- PCIe Gen 4
- 40 GB HBM2
Versatile compute acceleration for mainstream enterprise servers.
Ampere-based architecture - 3rd generation Tensor cores, fast FP64
Multi-instance GPU (MIG) - Up to 4 instances with 6 GB each
- Excellent performance at 165 W
- PCIe Gen 4
- 24 GB HBM2