NVIDIA Pascal Boost

NVIDIA: Higher Throughput and Innovation

Every Data Center Should Be Equipped

From scientific discovery to artificial intelligence, modern HPC data centers are solving some of the greatest challenges facing the world today in bioinformatics, drug discovery, and high-energy physics.

With traditional CPUs no longer delivering the performance gains they once did, the path forward for HPC data centers is GPU-accelerated computing. NVIDIA® Tesla® P100, powered by the NVIDIA Pascal™ architecture, is the computational engine driving the AI revolution and enabling HPC breakthroughs.

3 Reasons to Boost Your Data Center with Pascal Boost

Reason 1: Be Prepared for the AI Revolution

Reason 2: Top Applications are GPU-Accelerated

Reason 3: Boost Data Center Productivity & Throughput



The World's Most Powerful Data Center GPU

HPC and hyperscale data centers need to support the ever-growing demands of data scientists and researchers while staying within a tight budget. The old approach of deploying lots of commodity compute nodes requires vast interconnect overhead that substantially increases costs without proportionally increasing data center performance.

The NVIDIA Tesla P100 accelerator is the most powerful data center GPU ever built, designed to boost throughput and save money for HPC and hyperscale data centers. Powered by the new NVIDIA Pascal™ architecture, Tesla P100 enables a single node to replace up to half a rack of commodity CPU nodes by delivering lightning-fast performance across a broad range of HPC applications.
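As a rough sanity check on the node-replacement claim, peak double-precision throughput alone already gets close. This is a hedged back-of-the-envelope sketch: the dual-socket 14-core Xeon node and the 8-GPU server configuration are illustrative assumptions, not figures from this page.

```python
# Back-of-the-envelope check of the node-replacement claim.
# Assumptions (not stated above): a commodity node is a dual-socket,
# 14-core Xeon at 2.6 GHz with AVX2 FMA (16 FP64 FLOPs/cycle/core),
# and the GPU server holds 8x Tesla P100.

CPU_FP64_PER_NODE = 2 * 14 * 2.6e9 * 16   # ~1.16 TFLOPS peak per node
P100_FP64 = 4.7e12                        # Tesla P100 PCIe peak FP64
GPUS_PER_SERVER = 8

gpu_server_fp64 = GPUS_PER_SERVER * P100_FP64
nodes_matched = gpu_server_fp64 / CPU_FP64_PER_NODE
print(f"one 8x P100 server ~= {nodes_matched:.0f} CPU nodes (peak FP64)")
```

Under these assumptions, one 8-GPU server matches roughly 32 such CPU nodes in raw FP64 peak; real replacement ratios depend on how well each application uses the GPU.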

Choose the Right NVIDIA® Tesla® Solution for You

Each product below is listed with its intended workload, headline benefit, key features, and recommended server configurations.

Tesla P100 with NVLink™

Designed for: Deep learning training
Benefit: 10X faster deep learning training vs. last-gen GPUs
Key features:
  • 21 TeraFLOPS half-precision
  • 11 TeraFLOPS single-precision
  • 160 GB/s NVLink™ interconnect
  • 720 GB/s memory bandwidth
  • 16 GB of HBM2 memory
Recommended servers: up to 4x or 8x P100 NVLink GPUs
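The headline TFLOPS figures follow directly from the chip's core count and clock. A hedged sketch: the 3,584-core count and 1,480 MHz boost clock are NVIDIA's published P100 SXM2 specs, not values stated on this page.

```python
# Peak throughput = CUDA cores x clock x FLOPs per core per cycle.
# Core count and boost clock are from NVIDIA's public P100 SXM2 specs.

cuda_cores = 3584
boost_clock_hz = 1.48e9                 # 1480 MHz (SXM2/NVLink variant)

fp32 = cuda_cores * boost_clock_hz * 2  # fused multiply-add = 2 FLOPs/cycle
fp16 = fp32 * 2                         # packed half2 math runs at 2x FP32
print(f"FP32 peak: {fp32/1e12:.1f} TFLOPS")  # ~10.6
print(f"FP16 peak: {fp16/1e12:.1f} TFLOPS")  # ~21.2
```

The table's 11 and 21 TFLOPS are these values rounded; the PCIe variant's 9.3 TFLOPS figure reflects its lower boost clock.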

Tesla P100 PCIe

Designed for: HPC and deep learning
Benefit: Replace up to 32 CPU servers with a single P100 server for HPC and deep learning
Key features:
  • 4.7 TeraFLOPS double-precision
  • 9.3 TeraFLOPS single-precision
  • 540/720 GB/s memory bandwidth
  • 12 GB or 16 GB of HBM2 memory
Recommended servers: up to 4x, 8x, or 10x P100 PCIe GPUs

Tesla P40

Designed for: Deep learning training and inference
Benefit: 40X faster deep learning inference than a CPU server
Key features:
  • 47 TeraOPS INT8 inference
  • 12 TeraFLOPS single-precision
  • 24 GB of GDDR5 memory
  • 1 decode and 2 encode video engines
Recommended servers: up to 4x or 8x P40 PCIe GPUs

Tesla P4

Designed for: Deep learning inference and video transcoding
Benefit: 40X higher energy efficiency than a CPU for inference
Key features:
  • 22 TeraOPS INT8 inference
  • 5.5 TeraFLOPS single-precision
  • 1 decode and 2 encode video engines
  • 50 W / 75 W power
  • Low-profile form factor
Recommended servers: 1-2 GPUs per node; broad compatibility thanks to the low-profile form factor and low power consumption

Request a Free Quote
