Building an Enterprise-grade HPC and Deep Learning environment!

The Deep Learning Workflow
The two major operations through which deep learning produces insight are training and inference. Training feeds a neural network examples of the objects it should detect or recognize, such as animals or traffic signs, and lets it predict what those objects are. The training process reinforces correct predictions and corrects wrong ones. Once trained, a production neural network can achieve upwards of 90-98% correct results. Inference is the deployment of that trained network to evaluate new objects and make predictions with similar accuracy.
Training and inference both start with a forward-propagation calculation, but training goes further. During training, the results of the forward pass are compared against the (known) correct answer to compute an error value. A backward-propagation phase then propagates this error back through the network's layers and updates their weights using gradient descent, improving the network's performance on the task it is trying to learn. The sketch below illustrates both steps.
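
The following is a minimal, illustrative sketch of this workflow, assuming PyTorch; the model architecture, layer sizes, and dummy data are hypothetical and stand in for a real network and dataset:

import torch
import torch.nn as nn

# Hypothetical toy classifier: 32 input features, 10 object classes.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# --- One training step ---
inputs = torch.randn(8, 32)           # a batch of 8 example objects (dummy data)
labels = torch.randint(0, 10, (8,))   # the known correct answers

outputs = model(inputs)               # forward propagation: the network's predictions
loss = loss_fn(outputs, labels)       # compare predictions to the known answers (error value)

optimizer.zero_grad()
loss.backward()                       # backward propagation: send the error back through the layers
optimizer.step()                      # gradient descent: adjust the weights to reduce the error

# --- Inference: only the forward pass, applied to a new object ---
model.eval()
with torch.no_grad():
    new_object = torch.randn(1, 32)
    prediction = model(new_object).argmax(dim=1)

In practice, the training step repeats over many batches and epochs until the error stops improving; inference then reuses only the forward pass shown at the end.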

Configure your sysGen systems individually

Find the perfect barebone with just a few clicks and start configuring. Systems can be filtered by the following options:


Server housing
  • Server Type
  • Enclosure height
  • Number of nodes in one housing

CPU / RAM
  • CPU Family
  • RAM expansion: 0 GB – 12288 GB

Storage
  • Storage Interface
  • Hotswap 2.5"
  • Hotswap 3.5"

Interfaces
  • Network onboard
  • Number of PCIe interfaces
  • Number of GPUs

Use Case
  • Field of Application