VIRTUALISATION SOFTWARE ON NVIDIA GRAPHICS PROCESSORS
NVIDIA graphics processors are virtualised with NVIDIA virtual GPU (vGPU) software.
Find the right graphics processor for your needs below.
| GPU | A100 | A30 | A40 | A16 | A10 | A2 |
|---|---|---|---|---|---|---|
| GPU architecture | NVIDIA Ampere | NVIDIA Ampere | NVIDIA Ampere | NVIDIA Ampere | NVIDIA Ampere | NVIDIA Ampere |
| Memory size | 80 GB HBM2 | 24 GB HBM2 | 48 GB GDDR6 with ECC | 64 GB GDDR6 (16 GB per GPU) | 24 GB GDDR6 | 16 GB GDDR6 |
| Virtualisation workload | Highest-performance virtualised computing, including AI, HPC and data processing, with support for up to 7 MIG instances. Upgrade path for V100/V100S Tensor Core GPUs. | Virtualise mainstream compute and AI inference, with support for up to 4 MIG instances. | NVIDIA RTX® Virtual Workstation (vWS) enables advanced, high-quality 3D design and creative workflows. Upgrade path for Quadro RTX™ 8000 and RTX 6000. | Office productivity apps, streaming video and teleconferencing tools for graphics-rich virtual desktops accessible from anywhere. Upgrade path for M10. | Office productivity apps, streaming video and teleconferencing tools for graphics-rich virtual desktops accessible from anywhere. | Entry-level inference with low power, a small footprint and high performance for intelligent video analytics (IVA) with NVIDIA AI at the edge, delivering up to 7X more inference performance. |
| vGPU software support | NVIDIA Virtual Compute Server (vCS) | NVIDIA vCS | NVIDIA RTX vWS, NVIDIA Virtual PC (vPC), NVIDIA Virtual Apps (vApps), vCS | NVIDIA RTX vWS, vPC, vApps, vCS | vPC/vApps, vCS, vWS | vPC/vApps, vCS, vWS, NVIDIA AI Enterprise |
| GPU | A6000 | A5000 | A4500 | A4000 | A2000 |
|---|---|---|---|---|---|
| GPU architecture | NVIDIA Ampere | NVIDIA Ampere | NVIDIA Ampere | NVIDIA Ampere | NVIDIA Ampere |
| Memory size | 48 GB GDDR6 | 24 GB GDDR6 | 20 GB GDDR6 | 16 GB GDDR6 | 6 GB GDDR6 or 12 GB GDDR6 |
| Virtualisation workload | Ultra-high-performance rendering, simulation and 3D design with NVIDIA vWS. AI, deep learning and data science with NVIDIA vCS. | Work with the largest and most complex RTX-enabled rendering, 3D design and creative applications with NVIDIA vWS. | Work with the largest and most complex RTX-enabled rendering, 3D design and creative applications with NVIDIA vWS. | Develop advanced, high-quality 3D designs and creative workflows with RTX applications and NVIDIA vWS. | Small, power-efficient graphics card for compact workstations, with RT Cores for ray tracing and Tensor Cores for AI acceleration. |
| vGPU software support | vWS, vPC, vApps, vCS | vWS, vPC, vApps, vCS | vWS, vPC, vApps, vCS | vWS, vPC, vApps, vCS | vWS, vPC, vApps, vCS |
ACCELERATED COMPUTING POWER, VIRTUALISED
AI, deep learning and data science demand unprecedented computing power. NVIDIA Virtual Compute Server (vCS) enables data centres to accelerate server virtualisation with the latest NVIDIA data centre GPUs, such as the NVIDIA A100 and A30 Tensor Core GPUs, so that even the most compute-intensive workloads, including artificial intelligence, deep learning and data science, can run in a virtual machine (VM) with NVIDIA vGPU technology. This is not a small step for virtualisation; it is a giant leap.
SCALED FOR MAXIMUM EFFICIENCY
NVIDIA virtual GPUs deliver near bare-metal performance for GPU-accelerated AI in a hypervisor-based virtualised environment, along with maximum utilisation, management and monitoring.
PERFORMANCE SCALING FOR DEEP LEARNING TRAINING WITH VCS ON NVIDIA A100 TENSOR CORE GPUS
Developers, data scientists, researchers and students need massive computing power for deep learning training. Our A100 Tensor Core GPU speeds up this work, enabling more to be achieved faster. NVIDIA Virtual Compute Server software delivers nearly the same performance as bare metal, even when scaled to large deep learning training models that use multiple GPUs.


DEEP LEARNING INFERENCE THROUGHPUT WITH MIG ON NVIDIA A100 TENSOR CORE GPUS WITH VCS
Multi-Instance GPU (MIG) technology partitions the NVIDIA A100 Tensor Core GPU into up to seven instances, each fully isolated with its own high-bandwidth memory, cache and compute cores. MIG can be combined with Virtual Compute Server, providing one VM per MIG instance, and performance is consistent whether an inference workload runs across multiple MIG instances on bare metal or virtualised with vCS.
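As a sketch of how such a partitioning might be configured on the host, the following commands use the `nvidia-smi` MIG management interface. The GPU index `0` and the `1g.10gb` profile (ID 19 on an 80 GB A100) are illustrative assumptions; the available profiles depend on the installed A100 variant and driver release.

```shell
# Enable MIG mode on GPU 0 (the GPU must be idle; a reset may be required)
sudo nvidia-smi -i 0 -mig 1

# List the MIG GPU-instance profiles this GPU supports
sudo nvidia-smi mig -lgip

# Create seven 1g.10gb GPU instances (profile ID 19, assumed here),
# each with its default compute instance (-C)
sudo nvidia-smi mig -i 0 -cgi 19,19,19,19,19,19,19 -C

# Verify the instances; with vCS, each instance can back a separate VM
sudo nvidia-smi mig -lgi
```

In a vCS deployment, the hypervisor then maps each MIG-backed vGPU profile to one VM, preserving the hardware isolation between tenants.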
RESOURCES FOR IT MANAGERS
Learn how NVIDIA Virtual Compute Server maximises performance and simplifies IT management.

OPTIMISED UTILISATION
Fully utilise valuable GPU resources by seamlessly splitting GPUs for simpler workloads like inference, or deploying multiple virtual GPUs for more compute-intensive workloads like deep learning training.

MANAGEABILITY AND MONITORING
Ensure the availability and readiness of the systems that data scientists and researchers need. Monitor GPU performance at guest, host and application levels. You can even use management tools like suspend/resume and live migration. Learn more about the operational benefits of GPU virtualisation.
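For the host-level side of this monitoring, a minimal sketch using the standard `nvidia-smi` tool is shown below. The `vgpu` subcommand is only available on vGPU-licensed hosts, and the exact output fields vary by driver release.

```shell
# Per-GPU utilisation and memory use at the host level
nvidia-smi --query-gpu=index,name,utilization.gpu,memory.used,memory.total \
  --format=csv

# List active vGPUs and the VMs they are attached to (vGPU hosts only)
nvidia-smi vgpu

# Per-vGPU utilisation, refreshed periodically
nvidia-smi vgpu -u
```

Application- and guest-level metrics are typically gathered through the hypervisor's management stack or NVIDIA's monitoring APIs rather than from the host shell alone.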