NEW TECHNIQUES CHANGE EVERYTHING, WAITING IS NOT AN OPTION, THE TRAIN IS ROLLING ...

Deep Learning is the fastest-growing segment of machine learning and artificial intelligence. It enables the development of complex, multi-layer algorithms based on Deep Neural Networks (DNNs). Deep Learning has enabled breakthroughs in object, face, image and language recognition, machine translation, big data analysis and natural language processing. Its transformative effect reaches many industries, such as security, social media, retail, finance and Industry 4.0/IoT (Internet of Things), and leads to a fundamentally new way of thinking about the data, technology, products and services offered.

Modern high-performance computing (HPC) data centers are key to solving some of the world’s most important scientific and engineering challenges. The NVIDIA® Tesla® accelerated computing platform powers these modern data centers with industry-leading applications that accelerate HPC and AI workloads.

WHY SYSGEN PLAYS A STRONG ROLE IN HPC AND DEEP LEARNING SUPERCOMPUTING

SYSGEN HAS MORE THAN 20 YEARS OF IT EXPERIENCE IN RESEARCH AND DEVELOPMENT

  • sysGen has a strong partnership with NVIDIA, the leading developer of HPC and Deep Learning hardware and software.
  • sysGen has a strong partnership with Supermicro, whose NVIDIA Tesla-supported SuperServers® establish Supermicro as the true global leader in High-Performance, Enterprise-Class SuperComputing and GreenIT.
  • sysGen has a strong partnership with BeeGFS, offering the leading Parallel Cluster File System, developed to deliver high performance and very high fault tolerance.
  • sysGen offers complete Cluster and Deep Learning Solutions, including advanced Management software, as turnkey solutions.

With CUDA and OpenCL, two GPU programming environments are available that enable GPUs to be used for applications that can be parallelized (a minimal CUDA sketch follows the list below):

  • CUDA is NVIDIA's proprietary programming environment for its own hardware. It provides a C-based programming language.
  • OpenCL was originally developed by Apple but is now managed by the Khronos Group, which maintains a number of open standards in the audiovisual media sector. OpenCL is an open standard that supports multi-vendor hardware, including desktop and laptop GPUs from AMD/ATI and NVIDIA. If no supported GPU is available, OpenCL can fall back to running the application on the host CPU.
  • CUDA is the more mature environment and has easy-to-use high-level APIs. OpenCL has the advantage of being an open standard, and Intel has announced that it will support OpenCL on future CPU products.
  • Both CUDA and OpenCL are supported by the major operating systems (Windows, Linux and macOS).
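
The following is a minimal, illustrative CUDA sketch of an element-wise vector addition, showing what a kernel written in CUDA's C-based language and launched through the runtime API looks like. The kernel name, array size and launch configuration are arbitrary choices for this example, not part of any specific sysGen or NVIDIA product.

    // Illustrative CUDA example: element-wise vector addition on the GPU.
    // Sizes, names and the minimal error handling are placeholder choices.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void vecAdd(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;   // one thread per element
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;                   // 1M elements (example size)
        size_t bytes = n * sizeof(float);
        float *a, *b, *c;
        cudaMallocManaged(&a, bytes);            // unified memory, visible to CPU and GPU
        cudaMallocManaged(&b, bytes);
        cudaMallocManaged(&c, bytes);
        for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

        int threads = 256;
        int blocks  = (n + threads - 1) / threads;
        vecAdd<<<blocks, threads>>>(a, b, c, n); // launch the kernel on the GPU
        cudaDeviceSynchronize();                 // wait for the GPU to finish

        printf("c[0] = %f\n", c[0]);             // expected output: 3.000000
        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }

Compiled with nvcc, such a program runs unchanged on any CUDA-capable NVIDIA GPU; an OpenCL version of the same computation would express the kernel as a separate source string dispatched through the OpenCL runtime.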

Reliability, performance and efficiency:

  • Performance: Up to 16 GPGPU cards connected via NVSwitch for maximum computing power. NVIDIA NVSwitch is the first on-node switch architecture to support 16 fully connected GPUs in a single server node and drive simultaneous communication between all eight GPU pairs at an incredible 300 GB/s each. These 16 GPUs can be used as a single large-scale accelerator with 0.5 terabytes of unified memory space and 2 petaFLOPS of deep learning compute power (see the sketch after this list).
  • Max. Bandwidth: 6x PCI-E 3.0 x16, 1x PCI-E 3.0 x8 or other flexible configurations
  • System Management: Server management & GPGPU status monitoring via IPMI 2.0
  • Reliability: Redundant power supplies and intelligent cooling control
  • Efficiency: Outstanding system architecture for optimizing the TCO, with platinum power supply units, modern cooling and high-end mainboard components
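
As an illustration of how a node with multiple interconnected GPUs is used from software, the following hedged sketch (not tied to any specific sysGen system) enumerates the GPUs visible in a node and enables CUDA peer-to-peer access between them; over NVLink/NVSwitch this direct access is what lets the GPUs act together as one large accelerator with a shared memory space.

    // Illustrative sketch: enumerate GPUs and enable peer-to-peer access between them.
    // The all-pairs loop and printed messages are example choices, not a product feature.
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        int count = 0;
        cudaGetDeviceCount(&count);                        // number of visible GPUs
        printf("GPUs in this node: %d\n", count);

        for (int dev = 0; dev < count; ++dev) {
            cudaSetDevice(dev);                            // make 'dev' the current device
            for (int peer = 0; peer < count; ++peer) {
                if (peer == dev) continue;
                int ok = 0;
                cudaDeviceCanAccessPeer(&ok, dev, peer);   // can 'dev' reach 'peer' memory?
                if (ok) {
                    cudaDeviceEnablePeerAccess(peer, 0);   // map peer memory into 'dev'
                    printf("GPU %d can directly access GPU %d\n", dev, peer);
                }
            }
        }
        return 0;
    }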