NVIDIA AI Enterprise is an end-to-end, cloud-native suite of AI and data analytics software optimized, certified, and supported by NVIDIA to run on VMware vSphere with NVIDIA-Certified Systems™. It includes key NVIDIA technologies for rapidly deploying, managing and scaling AI workloads in the modern hybrid cloud.

VMWARE + NVIDIA AI-ENABLED PLATFORM

MEET THE NVIDIA EGX PLATFORM

One architecture. For every enterprise workload. Discover the platform that unifies the data center and brings accelerated computing to every enterprise.

ADVANTAGES

OPTIMIZED FOR PERFORMANCE

Achieve near bare-metal performance across multiple nodes to enable large, complex training and machine learning workloads.

CERTIFIED FOR RED HAT AND VMWARE VSPHERE

Reduce deployment risk with a full suite of NVIDIA AI software certified for the VMware and Red Hat data center.

NVIDIA ENTERPRISE SUPPORT

Ensure mission-critical AI projects stay on track with enterprise-level support from NVIDIA.

FLEXIBILITY THROUGH RED HAT OPENSHIFT

This joint solution from NVIDIA and Red Hat gives data scientists the flexibility to use ML tools in containers to quickly build, scale, reproduce, and share their ML modeling results.

AI READY FOR ANY WORKLOAD

Support for deep learning frameworks and data science containers on CPU-only systems. Enterprises can now run AI on GPU-accelerated NVIDIA-Certified servers or on the same server models without GPUs, letting you deploy AI on existing infrastructure with the backing of NVIDIA's AI experts.

SUPPORT FOR NEW NVIDIA HARDWARE

Support for new NVIDIA hardware, including the NVIDIA A100X and A30X converged accelerators that enable faster, more efficient, and more secure AI systems, and the NVIDIA A2 GPU for space-constrained environments.

NEW UPDATED NVIDIA AI CONTAINERS

NVIDIA TAO Toolkit and the updated Triton Inference Server with FIL further streamline AI development and deployment. The NVIDIA TAO Toolkit accelerates AI development by 10x without requiring AI expertise. The updated NVIDIA Triton Inference Server now supports a Forest Inference Library (FIL) backend that provides the best inference performance for both neural networks and tree-based models on GPUs, enabling simplified deployment of large tree models with low latency and high accuracy. Tree-based models are used in fraud detection, sales forecasting, product defect prediction, pricing, recommender systems, and call center routing, to name a few.
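
To make the FIL workflow concrete, here is a minimal, hypothetical Python sketch that queries a tree-based model served by Triton's FIL backend using the tritonclient library. The model name "fraud_detection_fil" and the tensor names "input__0" and "output__0" are illustrative assumptions and must match the config.pbtxt of the model in your own Triton model repository.

    import numpy as np
    import tritonclient.http as httpclient

    # Hypothetical model and tensor names; adjust them to match the
    # config.pbtxt of the FIL model in your Triton model repository.
    MODEL_NAME = "fraud_detection_fil"

    client = httpclient.InferenceServerClient(url="localhost:8000")

    # A small batch of feature vectors, e.g. transaction features for fraud scoring.
    features = np.random.rand(4, 32).astype(np.float32)

    infer_input = httpclient.InferInput("input__0", list(features.shape), "FP32")
    infer_input.set_data_from_numpy(features)

    result = client.infer(
        model_name=MODEL_NAME,
        inputs=[infer_input],
        outputs=[httpclient.InferRequestedOutput("output__0")],
    )

    print(result.as_numpy("output__0"))  # per-sample scores from the tree model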

DEPLOYMENT OF AI IN THE CLOUD

Support for NVIDIA AI Enterprise-based Virtual Machine Images (VMIs) for use on Cloud Service Provider (CSP) infrastructure. Customers who have purchased NVIDIA AI Enterprise software can now deploy to specific NVIDIA GPU-accelerated cloud instances in AWS, Azure, or GCP with full support from NVIDIA.
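
As a rough illustration of such a cloud deployment, the following Python sketch uses boto3 to launch a GPU-accelerated EC2 instance from an NVIDIA AI Enterprise VMI. The AMI ID, key pair, and security group are placeholders, and the images and instance types actually available depend on your region and your NVIDIA AI Enterprise entitlement.

    import boto3

    # Placeholder values: substitute the NVIDIA AI Enterprise VMI (AMI) ID for
    # your region, plus your own key pair and security group.
    AMI_ID = "ami-0123456789abcdef0"
    INSTANCE_TYPE = "p4d.24xlarge"  # NVIDIA A100-accelerated EC2 instance type

    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.run_instances(
        ImageId=AMI_ID,
        InstanceType=INSTANCE_TYPE,
        MinCount=1,
        MaxCount=1,
        KeyName="my-key-pair",
        SecurityGroupIds=["sg-0123456789abcdef0"],
    )

    instance_id = response["Instances"][0]["InstanceId"]
    print(f"Launched NVIDIA AI Enterprise instance: {instance_id}")
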
Terminology

TERM
MEANING

Support Services
Includes technical support, upgrades, and maintenance.

Perpetual license
A non-expiring, permanent software license that can be used in perpetuity without the need to renew. Support Services are required and are available in three- or five-year increments. One-year support services are available for renewals only.

Subscription
A software license that is active for a specified period of time defined by the terms of the subscription. A subscription includes support services for the duration of the subscription term.

License server
An application that manages licensing and is installed on a physical or virtual server.

GPU
Graphics processing unit.

CPU
The central processing unit (CPU) of a computer is the part of the computer that retrieves and executes instructions.

CPU socket licensing
(1) For on-premises deployments, the number of physical processors in the computing environment on which NVIDIA AI Enterprise is installed, or (2) in a cloud computing environment, the number of virtual CPUs of the computing instance on which NVIDIA AI Enterprise is running. NVIDIA requires one license per CPU socket.
NVIDIA Virtual GPU Software Licensed Products

PRODUCT
DESCRIPTION

NVIDIA Virtual Applications (vApps)
For organizations using Citrix Virtual Apps and Desktops, RDSH, or other app-streaming or session-based solutions. Designed for PC-level applications and server-based desktops.

NVIDIA Virtual PC (vPC)
For users who want a virtual desktop that delivers a great experience with Windows PC applications, browsers, and high-definition video.

NVIDIA RTX™ Virtual Workstation (vWS)
For users who want to run professional graphics applications remotely, with full performance, on any device, anywhere.

NVIDIA Virtual Compute Server (vCS)
For compute-intensive server workloads such as artificial intelligence (AI), deep learning, or high-performance computing (HPC).
Supported NVIDIA GPUs optimized for compute workloads

GPU: NVIDIA HGX A100 / NVIDIA A100 / NVIDIA A30
Recommended use case: Compute optimized / Compute optimized / Compute optimized
Number of GPUs: 4 or 8 NVIDIA A100 / 1 NVIDIA A100 / 1 NVIDIA A30
FP32 cores per GPU: 6,912 / 6,912 / 3,584
Tensor Cores per GPU: 432 / 432 / 224
RT Cores: - / - / -
Total memory per GPU: 40 GB or 80 GB HBM2 / 40 GB or 80 GB HBM2 / 24 GB HBM2
MIG instances per GPU: 7 / 7 / 4
Max power per GPU: 400 W / 250 W or 300 W / 165 W
Form factor: 4x or 8x SXM4 GPUs / PCIe 4.0 dual-slot FHFL / PCIe 4.0 dual-slot FHFL
Card dimensions: - / 10.5" × 4.4" / 10.5" × 4.4"
Cooling solution: Passive / Passive / Passive
Our products: HGX A100 server systems / A100 server systems / NVIDIA A30 GPU
Supported NVIDIA GPUs optimized for mixed workloads

GPU: NVIDIA HGX A100 / NVIDIA A10 / NVIDIA T4
Recommended use case: Compute optimized / NVIDIA vWS – Performance Optimized (midrange); vCS – Compute Optimized / NVIDIA vWS – Performance Optimized (entry); vPC – Density Optimized; vCS – Compute Optimized
Number of GPUs: 4 or 8 NVIDIA A100 / 1 NVIDIA A10 / 1 NVIDIA T4 (Turing™ TU104)
FP32 cores per GPU: 6,912 / 9,216 / 2,560
Tensor Cores per GPU: 432 / 288 / 320
RT Cores: - / 72 / 40
Total memory per GPU: 40 GB or 80 GB HBM2 / 24 GB GDDR6 / 16 GB GDDR6
Max power per GPU: 400 W / 150 W / 70 W
Form factor: 4x or 8x SXM4 GPUs / PCIe 4.0 single-slot FHFL / PCIe 3.0 single-slot
Card dimensions: - / 10.5" × 4.4" / 2.7" × 6.6"
Cooling solution: Passive / Passive / Passive
Our products: HGX A100 server systems / NVIDIA A10 GPU / NVIDIA T4 GPU
General information on procurement

ENTITLEMENT
NVIDIA vGPU PRODUCTION SUMS

Maintenance
Access to all maintenance releases, bug fixes, and security patches for flexible upgrades in accordance with the NVIDIA Virtual GPU Software Lifecycle Policy.

Upgrades
Access to all new major releases with feature enhancements and new hardware support.

Long-term support branch maintenance
Available for up to 3 years from general availability per the NVIDIA Virtual GPU Software Lifecycle Policy.

Direct support
Direct access to NVIDIA support for timely resolution of customer-specific issues.

Support availability
Customer support available during normal business hours; cases accepted 24 × 7.
Access to the knowledge base ✓
Web support ✓
Email support ✓
Phone support ✓

SPEED UP YOUR AI JOURNEY WITH NVIDIA LAUNCHPAD

Get instant access to NVIDIA AI with free curated labs - from AI-powered chatbots
with Triton Inference Server to image classification with TensorFlow and much more.
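
As a taste of what one of these labs covers, the short TensorFlow sketch below classifies a single image with a pretrained ResNet-50. The image path is a placeholder; the LaunchPad labs themselves provide curated notebooks, data, and GPU-accelerated infrastructure.

    import numpy as np
    from tensorflow.keras.applications.resnet50 import (
        ResNet50, decode_predictions, preprocess_input)
    from tensorflow.keras.preprocessing import image

    # Placeholder image path; the labs ship their own curated datasets.
    img = image.load_img("example.jpg", target_size=(224, 224))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

    model = ResNet50(weights="imagenet")  # pretrained on ImageNet
    preds = model.predict(x)

    # Print the top-3 predicted classes with their probabilities.
    for _, label, prob in decode_predictions(preds, top=3)[0]:
        print(f"{label}: {prob:.3f}")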