Tesla

Choose the Right NVIDIA® Tesla® Solution for You

Each entry below lists the product, its key features, and the recommended server configuration.

Tesla V100 with NVLink™


Deep Learning Training
3x faster deep learning training than previous-generation P100 GPUs

Key Features:
  • 31.4 TeraFLOPS half-precision (FP16)
  • 15.7 TeraFLOPS single-precision (FP32)
  • 125 TeraFLOPS deep learning (Tensor Core mixed precision)
  • 300 GB/s NVLink™ interconnect
  • 900 GB/s memory bandwidth
  • 16 GB of HBM2 memory

Recommended Server: Up to 4x or up to 8x V100 NVLink GPUs
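
The 125 TeraFLOPS deep-learning figure comes from the V100's 640 Tensor Cores: each delivers 128 FP16 FLOPS per clock, so 640 × 128 × ~1.53 GHz boost clock ≈ 125 TeraFLOPS. Frameworks reach the Tensor Cores through mixed-precision training; below is a minimal sketch using PyTorch's automatic mixed precision (the model and data are placeholders, not part of this datasheet):

import torch
import torch.nn as nn

device = torch.device("cuda")
# Placeholder model and batch; any matmul-heavy network benefits similarly.
model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = torch.cuda.amp.GradScaler()      # loss scaling guards against FP16 underflow
x = torch.randn(64, 1024, device=device)
y = torch.randint(0, 10, (64,), device=device)

for step in range(100):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():       # matmuls run in FP16 on the Tensor Cores
        loss = nn.functional.cross_entropy(model(x), y)
    scaler.scale(loss).backward()         # backward pass on the scaled loss
    scaler.step(optimizer)                # unscale gradients, then optimizer step
    scaler.update()                       # adapt the loss scale for the next step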

Tesla P40


Deep Learning Training and Inference
40x faster deep learning inference than a CPU server

Key Features:
  • 47 TeraOPS INT8 inference
  • 12 TeraFLOPS single-precision (FP32)
  • 24 GB of GDDR5 memory
  • 1 decode and 2 encode video engines

Recommended Server: Up to 4x or up to 8x P40 PCIe GPUs
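
The INT8 TeraOPS figures apply when a trained network is quantized to 8-bit integers for inference. One common route on Tesla GPUs is building a TensorRT engine with the INT8 flag set; a sketch follows, assuming the TensorRT 8.x Python API (the ONNX file name is a placeholder, and the calibrator required in practice is omitted for brevity):

import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:       # placeholder model file
    if not parser.parse(f.read()):
        raise RuntimeError(parser.get_error(0))

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.INT8)     # request the 8-bit integer kernels
# In practice, also attach an INT8 calibrator (an IInt8EntropyCalibrator2
# subclass fed representative batches) so TensorRT can pick scale factors.
engine_bytes = builder.build_serialized_network(network, config)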

Tesla P4


Deep Learning Inference and Video Transcoding
40x higher energy efficiency than a CPU for inference

Key Features:
  • 22 TeraOPS INT8 inference
  • 5.5 TeraFLOPS single-precision (FP32)
  • 1 decode and 2 encode video engines
  • 50 W / 75 W power
  • Low-profile form factor

Recommended Server: 1-2 GPUs per node
  • Broad compatibility thanks to the low-profile form factor and low power consumption
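
The decode and encode engines can be exercised directly from FFmpeg builds with NVDEC/NVENC support, for example: ffmpeg -hwaccel cuda -i input.mp4 -c:v h264_nvenc output.mp4 (file names are placeholders). This keeps the transcode on the GPU's fixed-function video hardware rather than its CUDA cores.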

Tesla T4


Deep Learning Inference
60x higher energy efficiency than a CPU for inference

Key Features:
  • 130 TeraOPS INT8 inference
  • 260 TeraOPS INT4 inference
  • 8.1 TeraFLOPS single-precision (FP32)
  • 65 TeraFLOPS mixed-precision (FP16/FP32)
  • 75 W power
  • Low-profile form factor

Recommended Server: 1-20 GPUs per node
  • Broad compatibility thanks to the low-profile form factor and low power consumption
  • T4 Inference Servers
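
With node configurations ranging from 1 to 20 GPUs, it can help to verify what each node actually exposes before sizing a deployment. A small sketch, assuming PyTorch with CUDA available:

import torch

# Enumerate the visible GPUs and print identifying properties.
for i in range(torch.cuda.device_count()):
    p = torch.cuda.get_device_properties(i)
    print(f"GPU {i}: {p.name}, {p.total_memory / 2**30:.0f} GiB, "
          f"compute capability {p.major}.{p.minor}, {p.multi_processor_count} SMs")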