sysGen GmbH (Systeme und Informatikanwendungen Nikisch GmbH) - Am Hallacker 48a - 28327 Bremen - info@sysgen.de

Welcome to the new website of sysGen. Please use our contact form if you have any questions about our content.
Due to the ongoing chip shortage and the resulting significant price increases from the major IT manufacturers, online price calculation is currently not available. Please note that prices quoted via our website may differ from the final offer.

ACCELERATED COMPUTING POWER, VIRTUALISED

AI, deep learning and data science require unprecedented computing power. NVIDIA Virtual Compute Server (vCS) lets data centres accelerate server virtualisation with the latest NVIDIA data centre GPUs, such as the NVIDIA A100 and A30 Tensor Core GPUs, so that even the most compute-intensive workloads, including artificial intelligence, deep learning and data science, can run in a virtual machine (VM) with NVIDIA vGPU technology. This is not a small step for virtualisation, but a giant leap.


Show document: vCS solution overview (PDF 280 KB)

FEATURES

GPU Sharing

Fractional GPU sharing is possible only with NVIDIA vGPU technology. It lets multiple VMs share a single GPU, maximising utilisation for lighter workloads that still require GPU acceleration.
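As a rough illustration (not NVIDIA's actual scheduler), fractional sharing carves a GPU's framebuffer into fixed-size vGPU profiles; the profile sizes below are hypothetical, but the capacity arithmetic is the same:

```python
def vms_per_gpu(framebuffer_gb: int, profile_gb: int) -> int:
    """How many vGPU-backed VMs fit on one physical GPU when its
    framebuffer is split into equal, fixed-size profiles."""
    if profile_gb <= 0 or profile_gb > framebuffer_gb:
        raise ValueError("profile must fit within the framebuffer")
    return framebuffer_gb // profile_gb

# Hypothetical example: a 40 GB GPU split into 10 GB profiles
# hosts four VMs; 5 GB profiles host eight.
print(vms_per_gpu(40, 10))  # 4
print(vms_per_gpu(40, 5))   # 8
```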

GPU Aggregation

With GPU aggregation, a VM can access multiple GPUs, which compute-intensive workloads often require. vCS supports both multi-vGPU and peer-to-peer computing: with multi-vGPU the GPUs are not directly connected, while with peer-to-peer they are connected through NVLink for higher bandwidth.

MANAGEMENT AND MONITORING

vCS supports monitoring at the app, guest and host levels. Proactive management features make it possible to perform live migration, suspend and resume VMs, and set thresholds that reveal consumption trends affecting the user experience. All of this is exposed through the vGPU Management SDK.
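A hedged sketch of host-level GPU monitoring using the NVML Python bindings (`pynvml`), which underpin tooling like the vGPU Management SDK; the 80% utilisation threshold is an arbitrary example, and the NVML calls run only if the bindings and an NVIDIA driver are actually present:

```python
try:
    import pynvml  # NVML bindings; not installed everywhere
except ImportError:
    pynvml = None

def over_threshold(utilisation_pct: float, threshold_pct: float = 80.0) -> bool:
    """Flag consumption that may impact the user experience.
    The 80% default is an illustrative choice, not an NVIDIA value."""
    return utilisation_pct > threshold_pct

if __name__ == "__main__" and pynvml is not None:
    try:
        pynvml.nvmlInit()
        handle = pynvml.nvmlDeviceGetHandleByIndex(0)
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)  # .gpu / .memory, in %
        print(f"GPU 0 utilisation: {util.gpu}% (alert: {over_threshold(util.gpu)})")
        pynvml.nvmlShutdown()
    except pynvml.NVMLError:
        pass  # no NVIDIA driver/GPU available on this host
```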

NGC

NVIDIA GPU Cloud (NGC) is a hub for GPU-optimised software that simplifies workflows for deep learning, machine learning and HPC, and now supports virtualised environments with NVIDIA vCS.

Peer-to-Peer Computing

NVIDIA® NVLink™ is a fast, direct GPU-to-GPU connection that provides higher bandwidth, more connections and improved scalability for system configurations with multiple GPUs - now virtually supported with NVIDIA Virtual GPU (vGPU) technology.

ECC and Page Retirement

Error Correction Code (ECC) and Page Retirement provide increased reliability for computing applications that are vulnerable to data corruption. They are particularly important in large cluster computing environments where GPUs process very large data sets and/or run applications for long periods of time.

MULTI-INSTANCE GPU (MIG)

Multi-Instance GPU (MIG) is a revolutionary technology that expands the capabilities of the data centre: each NVIDIA A100 Tensor Core GPU can be partitioned into up to seven fully isolated instances, each secured at the hardware level with its own dedicated memory, cache and high-bandwidth compute cores. With vCS software, a VM can run on each of these MIG instances, so organisations gain the management, monitoring and operational benefits of hypervisor-based server virtualisation.
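As an illustrative sketch (not the actual MIG allocator), the A100's seven-instance limit can be modelled as seven compute slices: a mix of instance profiles fits only if their slice counts do. The profile names follow NVIDIA's published A100 40 GB MIG profiles, but the validation logic here is a deliberate simplification:

```python
# GPC (compute) slices consumed by common A100 40 GB MIG profiles.
MIG_SLICES = {"1g.5gb": 1, "2g.10gb": 2, "3g.20gb": 3, "4g.20gb": 4, "7g.40gb": 7}
TOTAL_SLICES = 7  # an A100 exposes at most seven isolated instances

def fits(requested: list[str]) -> bool:
    """Check whether a mix of MIG profiles fits on one A100.
    Simplified: real placement also has memory-slice and
    alignment constraints not modelled here."""
    return sum(MIG_SLICES[p] for p in requested) <= TOTAL_SLICES

print(fits(["1g.5gb"] * 7))          # True: seven isolated instances
print(fits(["3g.20gb", "4g.20gb"]))  # True: 3 + 4 = 7 slices
print(fits(["4g.20gb", "4g.20gb"]))  # False: eight slices needed
```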

GPUDirect

GPUDirect® Remote Direct Memory Access (RDMA) allows network devices to access GPU memory directly, bypassing the CPU's host memory, reducing GPU communication latency and fully offloading the CPU.

SCALED FOR MAXIMUM EFFICIENCY

NVIDIA Virtual GPUs deliver near bare-metal performance in a virtualised environment, together with maximum utilisation, management and monitoring in a hypervisor-based virtualisation environment for GPU-accelerated AI.

PERFORMANCE SCALING FOR DEEP LEARNING TRAINING WITH VCS ON NVIDIA A100 TENSOR CORE GPUS

Developers, data scientists, researchers and students need massive computing power for deep learning training. The NVIDIA A100 Tensor Core GPU speeds up this work, enabling more to be achieved faster. NVIDIA Virtual Compute Server software delivers nearly the same performance as bare metal, even when scaling to large deep learning training models that use multiple GPUs.

DEEP LEARNING INFERENCE THROUGHPUT WITH MIG ON NVIDIA A100 TENSOR CORE GPUS WITH VCS

Multi-Instance GPU (MIG) is a technology available only on the NVIDIA A100 Tensor Core GPU that partitions it into up to seven instances, each fully isolated with its own high-bandwidth memory, cache and cores. MIG can be used with Virtual Compute Server, providing one VM per MIG instance. Performance is consistent whether an inference workload runs across multiple MIG instances on bare metal or virtualised with vCS.

RESOURCES FOR IT MANAGERS

Learn how NVIDIA Virtual Compute Server maximises performance and simplifies IT management.

OPTIMISATION OF USE

Fully utilise valuable GPU resources by seamlessly splitting GPUs for simpler workloads like inference, or deploying multiple virtual GPUs for more compute-intensive workloads like deep learning training.

MANAGEABILITY AND MONITORING

Ensure the availability and readiness of the systems that data scientists and researchers need. Monitor GPU performance at guest, host and application levels. You can even use management tools like suspend/resume and live migration. Learn more about the operational benefits of GPU virtualisation.

Discover advantages for IT

GPU RECOMMENDATIONS

                          NVIDIA A100    NVIDIA V100S   NVIDIA A40     NVIDIA RTX 8000   NVIDIA RTX 6000
Memory                    40 GB HBM2     32 GB HBM2     48 GB GDDR6    48 GB GDDR6       24 GB GDDR6
Peak FP32                 19.5 TFLOPS    16.4 TFLOPS    38.1 TFLOPS    14.9 TFLOPS       14.9 TFLOPS
Peak FP64                 9.7 TFLOPS     8.2 TFLOPS     -              -                 -
NVLink: GPUs per VM       Up to 8        Up to 8        2              2                 2
ECC and Page Retirement   Yes            Yes            Yes            Yes               Yes
Multi-vGPU per VM¹        Up to 16       Up to 16       Up to 16       Up to 16          Up to 16

PARTNER IN VIRTUALISATION