A concise overview of NVIDIA Tesla products

Choose the Right NVIDIA® Tesla® Solution for You

With NVIDIA's Tesla architecture you get the performance you need to speed up your deep learning and HPC workflows.

Modern high performance computing (HPC) data centers are key to solving some of the world's most important scientific and engineering challenges. The NVIDIA® Tesla® accelerated computing platform powers these data centers with industry-leading applications, accelerating HPC and AI workloads. The Tesla V100 GPU is the engine of the modern data center, delivering breakthrough performance with fewer servers, less power consumption, and reduced networking overhead, for total cost savings of 5–10x.

Below you will find a concise comparison of the various products and their use cases.


Tesla V100 with NVLink™

Deep Learning Training
3x faster deep learning training than previous-generation P100 GPUs
  • 31.4 TeraFLOPS half-precision
  • 15.7 TeraFLOPS single-precision
  • 125 TeraFLOPS Deep Learning
  • 300 GB/s NVLink™ Interconnect
  • 900 GB/s memory bandwidth
  • 32 GB of HBM2 memory
Available in configurations with up to 4 or up to 8 V100 NVLink GPUs per node
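As a rough sanity check, the peak-throughput figures above can be reproduced from the V100's published core counts and boost clock. This is a back-of-the-envelope sketch, assuming the commonly cited figures of 5120 FP32 CUDA cores, 640 Tensor Cores, and a ~1530 MHz boost clock; small differences versus the datasheet come from rounding.

```python
# Back-of-the-envelope peak throughput for a Tesla V100.
# Assumed published figures (not from this article): 5120 FP32 CUDA cores,
# 640 Tensor Cores, ~1530 MHz boost clock.
CUDA_CORES = 5120
TENSOR_CORES = 640
BOOST_CLOCK_GHZ = 1.53

# Each CUDA core retires one FMA (2 floating-point ops) per clock.
fp32_tflops = CUDA_CORES * 2 * BOOST_CLOCK_GHZ / 1000  # ~15.7
# FP16 runs at twice the FP32 rate.
fp16_tflops = 2 * fp32_tflops                          # ~31.4
# Each Tensor Core performs a 4x4x4 matrix FMA per clock:
# 64 FMAs = 128 ops per clock.
tensor_tflops = TENSOR_CORES * 128 * BOOST_CLOCK_GHZ / 1000  # ~125

print(f"FP32:        {fp32_tflops:.1f} TFLOPS")
print(f"FP16:        {fp16_tflops:.1f} TFLOPS")
print(f"Tensor Core: {tensor_tflops:.0f} TFLOPS")
```

This also makes the relationship between the listed numbers explicit: the half-precision figure is exactly twice the single-precision figure, and the 125 "deep learning" TFLOPS come from the Tensor Cores.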

Tesla T4

Deep Learning Inference
60X higher energy efficiency than a CPU for inference
  • 130 TeraOPS INT8 inference
  • 260 TeraOPS INT4 inference
  • 8.1 TeraFLOPS single-precision
  • 65 TeraFLOPS mixed-precision (FP16/FP32)
  • 75 W Power
  • Low profile form factor
1–20 GPUs per node
  • Broad compatibility thanks to the low-profile
    form factor and low power consumption
  • T4 Inference Servers
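The T4's peak rates follow the same pattern. A minimal sketch, assuming the commonly published figures of 2560 FP32 CUDA cores, 320 Tensor Cores, and a ~1590 MHz boost clock; the INT8 and INT4 rates each double the preceding precision's rate.

```python
# Back-of-the-envelope peak rates for a Tesla T4.
# Assumed published figures (not from this article): 2560 FP32 CUDA cores,
# 320 Tensor Cores, ~1590 MHz boost clock.
CUDA_CORES = 2560
TENSOR_CORES = 320
BOOST_CLOCK_GHZ = 1.59

# One FMA (2 ops) per CUDA core per clock.
fp32_tflops = CUDA_CORES * 2 * BOOST_CLOCK_GHZ / 1000             # ~8.1
# Each Tensor Core: 64 FMAs = 128 ops per clock at FP16.
fp16_tensor_tflops = TENSOR_CORES * 128 * BOOST_CLOCK_GHZ / 1000  # ~65
# Each step down in precision doubles the peak rate.
int8_tops = 2 * fp16_tensor_tflops                                # ~130
int4_tops = 2 * int8_tops                                         # ~260

print(f"FP32:       {fp32_tflops:.1f} TFLOPS")
print(f"FP16/FP32:  {fp16_tensor_tflops:.0f} TFLOPS")
print(f"INT8:       {int8_tops:.0f} TOPS")
print(f"INT4:       {int4_tops:.0f} TOPS")
```

At the 75 W board power listed above, the ~130 INT8 TOPS work out to roughly 1.7 TOPS per watt, which is the basis for inference-efficiency comparisons against CPUs.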
