Due to continual price increases by all manufacturers, an up-to-date online price calculation is currently not possible.
We would therefore like to point out that all prices quoted for inquiries made via the website may differ from the final offer!

Tesla GPUs


Accelerate your most demanding HPC and hyperscale data center workloads with NVIDIA® Tesla® GPUs. Data scientists and researchers can now parse petabytes of data orders of magnitude faster than they could using traditional CPUs, in applications ranging from energy exploration to deep learning. Tesla accelerators also deliver the horsepower needed to run bigger simulations faster than ever before. Plus, Tesla delivers the highest performance and user density for virtual desktops, applications, and workstations.

More Information
TCSV100MPCIE-PB; 8-pin CPU power connector; Package Content: 1x power adapter (2x PCIe 8-pin to single CPU 8-pin); CUDA, DirectCompute, OpenCL™, OpenACC; Memory Bus Width: 4096-bit; Memory Bandwidth: 1134GB/s; Tensor Performance: 112 TFLOPS (GPU Boost Clock); Tensor Cores: 640; CUDA Processing Cores: 5120; Boost Clock: 1300MHz; Single Precision FP32: 16.4TFLOPS; Double Precision FP64: 8.2TFLOPS
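The headline FLOPS figures in spec lines like the one above follow from a simple formula: CUDA cores × 2 FLOPs per core per cycle (one fused multiply-add) × clock rate. A minimal sketch to cross-check such figures (the function name and the implied-clock check are illustrative, not part of the listing):

```python
def peak_tflops(cuda_cores: int, clock_ghz: float, flops_per_core_per_cycle: int = 2) -> float:
    """Theoretical peak throughput in TFLOPS.

    An FMA (fused multiply-add) counts as 2 FLOPs per core per cycle.
    """
    return cuda_cores * flops_per_core_per_cycle * clock_ghz * 1e9 / 1e12

# Cross-check the listed figures: 5120 CUDA cores and 16.4 TFLOPS FP32
# together imply a boost clock of about 1.6 GHz.
implied_clock_ghz = 16.4 * 1e12 / (5120 * 2) / 1e9
print(f"implied boost clock: {implied_clock_ghz:.2f} GHz")  # prints 1.60
```

Note that the FP32 figure is consistent with a boost clock near 1.6 GHz rather than the 1300 MHz listed; such formulas are a quick sanity check on datasheet numbers.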


Artificial intelligence for self-driving cars. Forecasts for our climate. A new drug for cancer treatment. Some of the world's most important challenges urgently need solutions, and those solutions require enormous computing power. Today's data centers rely on many interconnected commodity computing nodes, which limits the performance needed for critical High Performance Computing (HPC) and hyperscale workloads. The NVIDIA® Tesla® V100 is the most advanced graphics processor ever built for the data center. It uses the NVIDIA Volta™ GPU architecture to deliver the fastest computing node in the world, with performance equivalent to hundreds of slower commodity nodes. Higher performance from fewer, more powerful nodes lets data centers significantly increase throughput while saving money. More than 400 HPC applications, including 9 of the top 10, and every major deep learning framework have already been GPU-accelerated, so every HPC customer can deploy accelerator GPUs in their data center.
7.111,95 (8.463,22 incl. VAT)

NVIDIA A100; Power: 250W; Memory: 40GB; Memory Bandwidth: 1,555GB/s; PCIe 4.0 x16

NVIDIA A100 - GPU Computing Processor - A100 Tensor Core - 40 GB HBM2 - PCIe 4.0 x16

The NVIDIA® A100 PCIe delivers 40GB of memory, third-generation Tensor Cores, and the ability to create up to 7 vGPUs with the Multi-Instance GPU (MIG) capability of NVIDIA's Ampere architecture. The A100 PCIe is now shipping. Contact us for more details and to build your A100-based solution.
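The "up to 7" figure comes from MIG's fixed partitioning of the GPU into compute and memory slices. A sketch of the arithmetic, assuming the A100 40GB profile table from NVIDIA's public MIG documentation (the profile names and sizes are an assumption, not part of this listing):

```python
# A100 40GB MIG profiles as (compute slices, memory in GB) -- taken from
# NVIDIA's MIG documentation; listed here as an assumption, not from this page.
MIG_PROFILES = {
    "1g.5gb":  (1, 5),
    "2g.10gb": (2, 10),
    "3g.20gb": (3, 20),
    "4g.20gb": (4, 20),
    "7g.40gb": (7, 40),
}

TOTAL_COMPUTE_SLICES = 7  # an A100 exposes 7 compute slices

def max_instances(profile: str) -> int:
    """How many instances of one profile fit on a single A100 40GB."""
    slices, _mem_gb = MIG_PROFILES[profile]
    return TOTAL_COMPUTE_SLICES // slices

print(max_instances("1g.5gb"))  # prints 7 -- the "up to 7 vGPUs" above
```

Instances of different profiles can also be mixed, as long as their compute-slice counts sum to at most 7.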

8.188,90 (9.744,80 incl. VAT)

TCST4M-PB; Memory Bus Width: 256-bit; Mixed Precision (FP16/FP32): 65 TFLOPS; Memory Bandwidth: 320GB/s; Tensor Cores: 320; CUDA Processing Cores: 2560; Single Precision FP32: 8.1TFLOPS; 8-Bit Integer (INT8): 130 TOPS

NVIDIA T4: Powering Scale-Out AI Training and Inference

Supercharge any server with the NVIDIA® T4 GPU, the world's most performant scale-out accelerator. Its low-profile, 70W design is powered by NVIDIA Turing™ Tensor Cores, delivering revolutionary multi-precision performance to accelerate a wide range of modern applications. This advanced GPU comes in an energy-efficient, small PCIe form factor, optimized for scale-out servers and purpose-built to deliver state-of-the-art AI.
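The spec-line figures above (8.1 TFLOPS FP32, 320GB/s bandwidth) determine whether a given kernel on the T4 is limited by compute or by memory traffic. A minimal roofline-model sketch using those two numbers (the function names are illustrative; the model itself is the standard roofline formulation, not something from this listing):

```python
PEAK_FP32_TFLOPS = 8.1      # from the T4 spec line above
PEAK_BANDWIDTH_GBS = 320.0  # from the T4 spec line above

def ridge_point_flops_per_byte() -> float:
    """Arithmetic intensity at which the T4 shifts from memory- to compute-bound."""
    return (PEAK_FP32_TFLOPS * 1e12) / (PEAK_BANDWIDTH_GBS * 1e9)

def attainable_tflops(intensity_flops_per_byte: float) -> float:
    """Simple roofline: the lesser of peak compute and bandwidth * intensity."""
    return min(PEAK_FP32_TFLOPS,
               PEAK_BANDWIDTH_GBS * 1e9 * intensity_flops_per_byte / 1e12)

print(f"ridge point: {ridge_point_flops_per_byte():.1f} FLOP/byte")  # prints 25.3
```

Kernels below roughly 25 FLOPs per byte of memory traffic are bandwidth-bound on this card; above it, the FP32 compute peak is the limit.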

1.748,45 (2.080,66 incl. VAT)