NVIDIA Tesla P100

18.04.2016
by Webadmin
sysGen Newsletter - Tesla P100

The Most Advanced Data Center GPU Ever Built


Artificial intelligence for self-driving cars. Predicting our climate's future. A new drug to treat cancer. Even in its early stages, deep learning is having a tremendous impact and is sweeping across every industry. Some of the world's most important challenges need to be solved today, but require tremendous amounts of computing to become reality. Today's large-scale data center relies on many interconnected commodity compute nodes, limiting the performance needed to drive these important workloads. Now, more than ever, the data center must prepare for the high-performance computing and hyperscale workloads being thrust upon it.


The NVIDIA® Tesla® P100 is purpose-built as the most advanced data center accelerator ever. It taps into an innovative new GPU architecture to deliver the world's fastest compute node, with higher performance than hundreds of slower commodity compute nodes. Lightning-fast nodes powered by Tesla P100 accelerate time-to-solution for the world's most important challenges with near-infinite compute needs in HPC and deep learning.


Tesla P100 and NVLink Deliver up to a 50x Performance Boost for Data Center Applications


NVIDIA TESLA P100 ACCELERATOR FEATURES AND BENEFITS

The Tesla P100 is reimagined from silicon to software, crafted with innovation at every level. Each groundbreaking technology delivers a dramatic jump in performance to inspire the creation of the world's fastest compute node.


Exponential Performance Leap with Pascal Architecture

The new NVIDIA Pascal™ architecture enables the Tesla P100 to deliver the highest absolute performance for HPC and hyperscale workloads. With more than 21 TeraFLOPS of FP16 performance, Pascal is optimized to drive exciting new possibilities in deep learning applications. Pascal also delivers more than 5 TeraFLOPS of double-precision and 10 TeraFLOPS of single-precision performance for HPC workloads.
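
The half-precision figure comes from Pascal's ability to execute packed FP16 math at twice the FP32 rate. As a rough illustration (not NVIDIA sample code), the CUDA sketch below runs a fused multiply-add on half2 pairs using the cuda_fp16.h intrinsics; it assumes a build for compute capability 6.0 (for example nvcc -arch=sm_60) and uses managed memory only to keep the host code short.

    #include <cuda_fp16.h>
    #include <cstdio>

    // y = a*x + y on packed half2 values. On Pascal (sm_60) the __hfma2
    // intrinsic maps to a native FP16x2 fused multiply-add, which is where
    // the doubled half-precision throughput comes from.
    __global__ void haxpy(int n, float a, const __half2 *x, __half2 *y) {
        __half2 a2 = __float2half2_rn(a);          // broadcast a into both halves
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = __hfma2(a2, x[i], y[i]);
    }

    // Initialize on the device to avoid host-side half conversions.
    __global__ void fill(int n, __half2 *x, __half2 *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {
            x[i] = __float2half2_rn(1.0f);
            y[i] = __float2half2_rn(2.0f);
        }
    }

    // Convert one result back to float so the host can inspect it.
    __global__ void check(const __half2 *y, float *out) {
        *out = __low2float(y[0]);
    }

    int main() {
        const int n = 1 << 20;                     // 1M half2 elements = 2M FP16 values
        __half2 *x, *y; float *out;
        cudaMallocManaged(&x, n * sizeof(__half2));
        cudaMallocManaged(&y, n * sizeof(__half2));
        cudaMallocManaged(&out, sizeof(float));
        int block = 256, grid = (n + block - 1) / block;
        fill<<<grid, block>>>(n, x, y);
        haxpy<<<grid, block>>>(n, 3.0f, x, y);     // y = 3*1 + 2 = 5
        check<<<1, 1>>>(y, out);
        cudaDeviceSynchronize();
        printf("y[0] = %.1f (expected 5.0)\n", *out);
        cudaFree(x); cudaFree(y); cudaFree(out);
        return 0;
    }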

Applications at Massive Scale with NVIDIA NVLink

Performance is often throttled by the interconnect. The revolutionary NVIDIA NVLink™ high-speed bidirectional interconnect is designed to scale applications across multiple GPUs, delivering roughly 5x the bandwidth of PCIe Gen 3, today's best-in-class technology.
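
From the programmer's side nothing NVLink-specific has to be written: applications use the standard CUDA peer-to-peer calls, and the driver routes the transfer over NVLink when the two GPUs are connected by it. The sketch below only shows that plumbing; it assumes a system with at least two GPUs and an arbitrary 256 MiB test buffer.

    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        int devices = 0;
        cudaGetDeviceCount(&devices);
        if (devices < 2) { printf("Need at least two GPUs.\n"); return 0; }

        int canAccess = 0;
        cudaDeviceCanAccessPeer(&canAccess, 0, 1);   // can GPU 0 map GPU 1's memory?
        printf("Peer access 0 -> 1: %s\n", canAccess ? "yes" : "no");

        const size_t bytes = 256 << 20;              // 256 MiB test buffer
        float *src, *dst;

        cudaSetDevice(0);
        cudaMalloc(&src, bytes);
        if (canAccess) cudaDeviceEnablePeerAccess(1, 0);  // flags must be 0

        cudaSetDevice(1);
        cudaMalloc(&dst, bytes);

        // Direct device-to-device copy; on NVLink-connected peers the
        // transfer travels over the NVLink fabric instead of PCIe.
        cudaMemcpyPeer(dst, 1, src, 0, bytes);
        cudaDeviceSynchronize();

        cudaFree(dst);
        cudaSetDevice(0);
        cudaFree(src);
        return 0;
    }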

Unprecedented Efficiency with CoWoS with HBM2

The Tesla P100 tightly integrates compute and data on the same package by adding CoWoS® (Chip-on-Wafer-on-Substrate) with HBM2 technology, delivering 3x the memory performance of the NVIDIA Maxwell™ architecture. This provides a generational leap in time-to-solution for data-intensive applications.
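
For data-intensive applications the practical question is how much of that bandwidth a workload actually sees. A common sanity check, sketched below with plain CUDA event timing, is to measure a large device-to-device copy and report the effective rate; the 1 GiB buffer size is an arbitrary choice, and a STREAM-style benchmark would be more rigorous.

    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        const size_t n = 256 << 20;                 // 256M floats = 1 GiB per buffer
        float *src, *dst;
        cudaMalloc(&src, n * sizeof(float));
        cudaMalloc(&dst, n * sizeof(float));

        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);

        cudaMemcpy(dst, src, n * sizeof(float), cudaMemcpyDeviceToDevice);  // warm-up

        cudaEventRecord(start);
        cudaMemcpy(dst, src, n * sizeof(float), cudaMemcpyDeviceToDevice);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);

        // A device-to-device copy reads and writes every byte once.
        double gbytes = 2.0 * n * sizeof(float) / 1e9;
        printf("Effective bandwidth: %.1f GB/s\n", gbytes / (ms / 1e3));

        cudaEventDestroy(start); cudaEventDestroy(stop);
        cudaFree(src); cudaFree(dst);
        return 0;
    }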

Simpler Programming with Page Migration Engine

The Page Migration Engine frees developers to focus more on tuning for compute performance and less on managing data movement. Applications can now scale beyond the GPU's physical memory size to virtually unlimited amounts of memory.
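
As a deliberately simplified illustration: on Pascal a managed allocation may exceed the 16 GB of on-board HBM2, and the Page Migration Engine faults pages onto the GPU as the kernel touches them. The 32 GB size below is an assumption chosen only to exceed the GPU's memory, and it presumes the host has enough RAM to back the allocation.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Scale every element; page faults pull the touched pages onto the GPU.
    __global__ void scale(double *data, size_t n, double factor) {
        size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
        if (i < n) data[i] *= factor;
    }

    int main() {
        const size_t n = 4ULL << 30;                 // 4G doubles = 32 GB, more than 16 GB HBM2
        double *data;
        if (cudaMallocManaged(&data, n * sizeof(double)) != cudaSuccess) {
            printf("Managed allocation failed.\n");
            return 1;
        }
        for (size_t i = 0; i < n; ++i) data[i] = 1.0;   // first touched on the CPU (slow, but fine for a sketch)

        const int block = 256;
        const unsigned int grid = (unsigned int)((n + block - 1) / block);
        scale<<<grid, block>>>(data, n, 2.0);        // pages migrate to the GPU on demand
        cudaDeviceSynchronize();

        printf("data[0] = %.1f (expected 2.0)\n", data[0]);  // page migrates back to the CPU
        cudaFree(data);
        return 0;
    }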


NVIDIA TESLA P100 ACCELERATOR SPECIFICATION

  • 5.3 TeraFLOPS double-precision performance with NVIDIA GPU Boost™
  • 10.6 TeraFLOPS single-precision performance with NVIDIA GPU Boost™
  • 21.2 TeraFLOPS half-precision performance with NVIDIA GPU Boost™
  • 160 GB/s bidirectional interconnect bandwidth with NVIDIA NVLink
  • 720 GB/s memory bandwidth with CoWoS HBM2 Stacked Memory
  • 16 GB of CoWoS HBM2 Stacked Memory
  • Enhanced Programmability with Page Migration Engine and Unified Memory
  • ECC protection for increased reliability
  • Server-optimized for best throughput in the data center
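
Several of these figures can be checked directly from the CUDA runtime. The short sketch below (an illustration, not NVIDIA tooling) queries the device properties and derives the theoretical peak memory bandwidth from the reported memory clock and bus width.

    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, 0);          // device 0

        // Peak bandwidth = 2 (double data rate) * memory clock * bus width in bytes.
        double peakGBs = 2.0 * prop.memoryClockRate * 1e3       // kHz -> Hz
                       * (prop.memoryBusWidth / 8.0) / 1e9;     // bits -> bytes

        printf("Device          : %s (compute %d.%d)\n", prop.name, prop.major, prop.minor);
        printf("Global memory   : %.1f GB\n", prop.totalGlobalMem / 1e9);
        printf("Memory bandwidth: %.0f GB/s (theoretical peak)\n", peakGBs);
        printf("ECC enabled     : %s\n", prop.ECCEnabled ? "yes" : "no");
        return 0;
    }
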
info@sysgen.de

Your sysGen Team