Building an Enterprise-grade HPC and Deep Learning environment!


The Deep Learning Workflow
The two major operations through which deep learning produces insight are training and inference. Training feeds the network examples of objects to be detected or recognized, such as animals or traffic signs, and has it predict what those objects are. The training process reinforces correct predictions and corrects wrong ones. Once trained, a production neural network can achieve upwards of 90-98% correct results. Inference is the deployment of a trained network to evaluate new objects and make predictions with similar accuracy.
Both training and inference start with the forward-propagation calculation, but training goes further. During training, the results of forward propagation are compared against the known correct answer to compute an error value. A backward-propagation phase then propagates this error back through the network's layers and updates their weights via gradient descent, improving the network's performance on the task it is learning.
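The training loop described above can be sketched in a few lines of NumPy. This is a minimal illustration, not a production framework: the single-layer network, toy data, loss function, and learning rate are all assumptions chosen for brevity, but the structure — forward propagation, error computation, backward propagation, gradient-descent update — follows the steps in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (illustrative): 4 examples with 3 features each, binary targets.
X = rng.normal(size=(4, 3))
y = np.array([[0.0], [1.0], [1.0], [0.0]])

W = rng.normal(size=(3, 1))   # layer weights
b = np.zeros((1, 1))          # bias
lr = 0.1                      # learning rate (assumed)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

first_loss = None
for step in range(1000):
    # Forward propagation: compute the network's predictions.
    pred = sigmoid(X @ W + b)
    # Compare against the known correct answers to get an error value (MSE).
    err = pred - y
    loss = float(np.mean(err ** 2))
    if first_loss is None:
        first_loss = loss
    # Backward propagation: gradient of the loss w.r.t. weights and bias.
    grad_out = 2.0 * err * pred * (1.0 - pred) / len(X)
    grad_W = X.T @ grad_out
    grad_b = grad_out.sum(axis=0, keepdims=True)
    # Gradient descent: step the parameters against the gradient.
    W -= lr * grad_W
    b -= lr * grad_b

# Inference: deploy the trained weights with the same forward pass.
final_loss = float(np.mean((sigmoid(X @ W + b) - y) ** 2))
```

After training, only the forward pass is needed for inference, which is why inference is much cheaper than training: the error computation and backward pass are dropped entirely.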

Configure your sysGen systems individually

Find a perfect barebone with just a few clicks and start configuring


Server housing

Enclosure height
Rack mountable enclosures
Standalone, Desktop, IoT
Number of nodes in one housing

CPU / RAM

CPU Onboard (e.g. Intel Atom, Intel Quark)
CPU Family
Number of CPUs per node
RAM expansion
RAM DIMM

Storage

Storage Drives
Maximum storage expansion
Storage Interface

Interfaces

Network onboard
Number of PCIe slots
Number of GPUs