NVIDIA EGX converged accelerators are part of the NVIDIA EGX™ AI platform and
combine the powerful performance of the NVIDIA Ampere architecture with the enhanced security and latency-reduction capabilities of the NVIDIA® BlueField®-2 data processing unit (DPU). EGX converged accelerators enable enterprises to build faster, more efficient, and more secure AI systems in data centers and at the edge.

Unprecedented GPU performance

For a wide range of compute-intensive workloads, the NVIDIA Ampere architecture delivers the largest generational performance leap ever, further securing and accelerating enterprise and edge infrastructure.

Security without compromise

NVIDIA BlueField-2 delivers innovative acceleration, security, and efficiency for any host. BlueField-2 combines the power of NVIDIA ConnectX®-6 Dx with programmable Arm cores and hardware offloads for software-defined storage, networking, security, and management workloads.

Faster data speeds

NVIDIA converged accelerators include an integrated PCIe Gen4 switch, which allows data to move between the GPU and DPU without traversing the server PCIe bus. Even in systems with PCIe Gen3, this communication occurs at full PCIe Gen4 speed. The result is new levels of data center efficiency and security for GPU-accelerated workloads, including AI, data analytics, 5G telecom, and other edge applications.


Unparalleled performance for GPU-powered, IO-intensive workloads


NVIDIA H100 CNX combines the performance of the NVIDIA H100 with the advanced networking capabilities of the NVIDIA ConnectX®-7 Smart Network Interface Card (SmartNIC) in a single, unique platform. This convergence delivers unprecedented performance for GPU-based input/output (IO)-intensive workloads, such as distributed AI training in the enterprise data center and 5G processing at the edge.


NVIDIA H100 and ConnectX-7 connect via an integrated PCIe Gen5 switch that provides a dedicated high-speed path for data transfers between GPU and network. This eliminates bottlenecks in data passing through the host and enables low, predictable latency, which is important for time-critical applications such as 5G signal processing.


Integrating a GPU and a SmartNIC into a single device inherently creates a balanced architecture. In systems where multiple GPUs and DPUs are desired, a converged accelerator card enforces the optimal one-to-one ratio of GPU to NIC. The design also avoids contention on the server's PCIe bus, so performance scales linearly with additional devices.


Because the GPU and SmartNIC are directly connected, customers can use mainstream PCIe Gen4 or even Gen3 servers to achieve performance levels otherwise only possible with high-end or purpose-built systems. Using a single card also saves power, space, and PCIe device slots, and enables further cost savings by allowing more accelerators per server.


Core software acceleration libraries such as the NVIDIA Collective Communications Library (NCCL) and Unified Communication X (UCX®) automatically use the highest-performance path for data transfers to GPUs. This allows existing multi-node accelerated applications to take advantage of H100 CNX without modification, resulting in immediate improvements.
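As an illustration of how this path selection can be observed in practice, the sketch below sets documented NCCL and UCX environment variables before a job starts, so the libraries log (or constrain) which transport they choose. The specific variable values are reasonable defaults, not prescriptions from this document, and the framework call is shown only as a comment:

```python
import os

# NCCL and UCX read these documented environment variables at startup.
# Setting them before initializing a distributed job makes the libraries
# report which transport they selected, so you can confirm that transfers
# use the converged card's direct GPU-NIC path rather than the host
# PCIe bus.
os.environ["NCCL_DEBUG"] = "INFO"                   # log transport/topology decisions
os.environ["NCCL_DEBUG_SUBSYS"] = "INIT,NET,GRAPH"  # focus logs on init, network, topology
os.environ["UCX_TLS"] = "rc,cuda_copy,gdr_copy"     # permit GPUDirect RDMA transports in UCX

# On a GPU node, an NCCL-based framework would be initialized after this
# point, e.g. (not run here):
#   torch.distributed.init_process_group(backend="nccl")

print(os.environ["NCCL_DEBUG"])
```

Because the selection is automatic, no application change is needed; the variables above only make the decision visible.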


Technical data

GPU memory
80 GB HBM2e
Memory bandwidth
> 2.0 TB/s
MIG instances
7 instances @ 10 GB each, 3 instances @ 20 GB each, or 2 instances @ 40 GB each
Interconnect
PCIe Gen5, 128 GB/s; NVLINK bridge
Network ports
1x 400 Gb/s or 2x 200 Gb/s, Ethernet or InfiniBand
Form factor
FHFL dual slot (full height, full length)
Max. power
350 W
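The 128 GB/s interconnect figure can be reproduced from the standard PCIe Gen5 link parameters. The arithmetic below is a quick sanity check of my own, not a calculation from the source:

```python
# PCIe Gen5: 32 GT/s per lane on a x16 link, 128b/130b line encoding.
transfer_rate_gt = 32   # gigatransfers per second, per lane
lanes = 16

raw_gb_per_s = transfer_rate_gt * lanes / 8   # 64 GB/s per direction (raw)

# Marketing figures usually sum both directions and ignore encoding:
bidirectional = 2 * raw_gb_per_s              # 128 GB/s

# Accounting for 128b/130b encoding gives the usable per-direction rate:
effective = raw_gb_per_s * 128 / 130          # about 63 GB/s per direction

print(bidirectional, round(effective, 1))
```

So the quoted 128 GB/s is the bidirectional raw figure; usable one-way payload bandwidth is slightly lower once line encoding is included.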



Integrating the GPU, DPU, and PCIe switch into a single device inherently creates a balanced architecture. In systems where multiple GPUs and DPUs are desired, a converged accelerator card avoids contention on the server's PCIe bus, so performance scales linearly with additional devices and becomes far more predictable. Consolidating these components onto one physical card also improves space requirements and energy efficiency. Converged cards greatly simplify deployment and ongoing maintenance, especially when installed in mainstream servers at scale.

Most powerful networking

With NVIDIA converged accelerators, enterprises benefit from the DPU's networking capabilities when building scalable infrastructure for modern applications. Modern workloads and data center designs typically impose significant networking overhead on the CPU. The NVIDIA SmartNIC, a core component of the NVIDIA DPU, offloads network security tasks from the CPU, including Transport Layer Security (TLS) and Internet Protocol Security (IPsec). The SmartNIC can also inspect network traffic and block malicious activity, providing enhanced security. NVIDIA DPU technology additionally enables converged NVIDIA GPUs to handle virtual networks faster and more efficiently.

Accelerating AI at the edge with 5G

NVIDIA AI-on-5G consists of the NVIDIA EGX™ hyper-converged computing platform, the NVIDIA Aerial™ SDK for software-defined 5G virtual wireless networks (vRANs), and enterprise AI applications, including SDKs such as NVIDIA Isaac™ and NVIDIA Metropolis™. This solution can be deployed locally and managed by enterprises or managed by hyperscalers such as Google Cloud, simplifying the deployment of AI applications over 5G edge networks.


Improved security

The convergence of NVIDIA's GPU and DPU creates a more secure AI processing engine: data generated at the edge can be sent over the network fully encrypted without traversing the server PCIe bus, ensuring it is protected end-to-end. With the NVIDIA DOCA SDK, you can easily create security and network services for the BlueField-2 DPU and leverage the DPU's hardware accelerators and CPU programmability to improve application performance and security.


NVIDIA's Converged Accelerator Portfolio

These devices enable data-intensive workloads to run at the edge and in the data center with maximum security and performance.

BlueField-2 and the A100 Tensor Core GPU working together

The BlueField-2 A100 combines the power of the NVIDIA A100 Tensor Core GPU with the BlueField-2 DPU, delivering unprecedented acceleration and flexibility for the world's most powerful data centers. With Multi-Instance GPU (MIG), each A100 can be partitioned into up to seven GPU instances, providing right-sized GPU acceleration that can be shared across users and applications.
The BlueField-2 A100 is designed for AI training, data analytics, and 5G telecom workloads in data centers that benefit from high-speed communication between the GPU and the network with guaranteed bandwidth, especially for large multi-node workloads.
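Once a GPU has been partitioned with MIG, CUDA can be pointed at a single instance through the standard CUDA_VISIBLE_DEVICES mechanism. The sketch below shows that pinning step; the UUID is a placeholder for illustration, not a real device:

```python
import os

# After partitioning an A100 with MIG (e.g. via `nvidia-smi mig`), each
# GPU instance receives its own UUID. A process can be restricted to a
# single instance by listing that UUID in CUDA_VISIBLE_DEVICES.
# The UUID below is hypothetical.
mig_instance = "MIG-00000000-0000-0000-0000-000000000000"
os.environ["CUDA_VISIBLE_DEVICES"] = mig_instance

# Any CUDA application launched from this process now enumerates exactly
# one device: the selected MIG instance.
print(os.environ["CUDA_VISIBLE_DEVICES"])
```

This is how a shared A100 serves several users at once: each workload is launched with a different MIG instance UUID and sees only its own slice of the GPU.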