Systeme und Informatikanwendungen Nikisch GmbH (sysGen GmbH) - Am Hallacker 48a - 28327 Bremen - info@sysgen.de

Welcome to the new website of sysGen. Please use our contact form if you have any questions about our content.
KEYNOTE NOVEMBER 9 | CONFERENCE & TRAINING NOVEMBER 8-11, 2021 | REGISTER HERE
Due to the widening chip shortage and the resulting significant price increases from the major IT manufacturers, online price calculations are currently not possible. Please note that prices quoted via our website may therefore differ from the final offer.

SETTING THE STANDARD FOR AI INFRASTRUCTURE

Organizations undergoing AI transformation need a platform for AI infrastructure that improves on traditional approaches, which in the past relied on slow compute architectures siloed into separate analytics, training, and inference workloads. The old approach introduced complexity, drove up costs, and limited scalability. This is why NVIDIA developed the NVIDIA DGX A100.

The NVIDIA DGX A100 is the third generation of the world's most advanced system designed specifically for AI and Data Science. It revolutionizes the enterprise data center with an infrastructure that unifies AI and Data Science workloads on a single, universal platform and architecture.

THE PORTFOLIO OF WORLD-LEADING PURPOSE-BUILT AI SYSTEMS

AI Workgroup System

Server-class workstations that are ideal for experimentation and team development. No data center required.

AI training, inference, and analytics.

A range of server solutions to help you tackle the most complex AI challenges.

Scaled AI infrastructure solution

Industry-standard infrastructure designs for AI companies

Turnkey AI infrastructure

Industry-leading full-cycle infrastructure - the fastest path to AI innovation at scale.

ADVANTAGES

With optimized AI software, including an AI-optimized base operating system, you can simplify deployment and become productive in hours instead of months.


DGX is the core building block of a number of TOP500 supercomputers and has been adopted by many leading companies.


The architecture is proven for multi-node scalability and was developed with industry leaders in storage, compute and networking.


NVIDIA DGXperts is a global team of over 14,000 AI-fluent professionals who have gained a wealth of experience over the last decade, helping you maximize the value of your DGX investment.


GAME-CHANGING PERFORMANCE: NVIDIA A100 TENSOR CORE GPU

Game-changing performance based on the NVIDIA A100 GPU: the world's first AI system delivering 5 PFLOPS, capable of effortlessly running analytics, training, and inference workloads simultaneously.
The NVIDIA DGX A100 features eight NVIDIA A100 Tensor Core GPUs that provide users with unmatched acceleration, and is fully optimized for NVIDIA CUDA-X™ software and the end-to-end NVIDIA Data Center Solution Stack.

NVIDIA A100 GPUs offer a new precision, TF32, which works like FP32 while delivering up to 20x higher FLOPS for AI compared to the previous generation - and best of all, no code changes are required to achieve this speedup. When using NVIDIA automatic mixed precision, the A100 provides an additional 2x performance boost with just one more line of code, using FP16 precision.

The A100 GPU also features class-leading memory bandwidth of 1.6 terabytes per second (TB/s), an increase of more than 70% over the previous generation, and significantly more on-chip memory, including a 40 MB Level 2 cache that is nearly 7x larger than the previous generation, maximizing compute performance.

The DGX A100 also introduces next-generation NVIDIA NVLink™, which doubles direct GPU-to-GPU bandwidth to 600 gigabytes per second (GB/s), nearly 10x higher than PCIe Gen 4, along with a new NVIDIA NVSwitch that is 2x faster than the previous generation.
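To make the TF32 trade-off concrete, here is a small illustrative Python sketch (not an NVIDIA tool): TF32 keeps FP32's 8-bit exponent, and therefore its numeric range, but uses only 10 explicit mantissa bits instead of FP32's 23. The sketch approximates this by truncating the 13 least-significant mantissa bits of an FP32 value; real hardware rounds rather than truncates, so treat this as an approximation of the precision loss only.

```python
import struct

def to_tf32(x: float) -> float:
    """Approximate rounding of an FP32 value to TF32 precision.

    TF32 keeps FP32's 8 exponent bits but only 10 mantissa bits
    (FP32 has 23), so we zero out the low 13 mantissa bits.
    """
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    bits &= ~((1 << 13) - 1)   # drop the 13 least-significant mantissa bits
    return struct.unpack("<f", struct.pack("<I", bits))[0]

# Values with short binary expansions survive exactly; others land on a
# coarser grid, with a relative error bounded by roughly 2**-10.
print(to_tf32(1.5))      # exactly representable in 10 mantissa bits
print(to_tf32(1.0001))   # snapped to the nearest-lower TF32 grid point
```

This is why training typically tolerates TF32 with no code changes: the range matches FP32, and the lost mantissa bits mostly affect digits beyond what stochastic gradient noise already dominates.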

This unprecedented performance delivers the fastest time-to-solution for training, inference, and analytics workloads, enabling users to address challenges that were not possible or practical before.

UNMATCHED FLEXIBILITY: NEW MULTI INSTANCE GPU (MIG) INNOVATION

Unmatched flexibility with Multi-Instance GPU (MIG) innovation that enables 7x inference performance per GPU and the ability to allocate resources that are right-sized for specific workloads.
MIG partitions a single NVIDIA A100 GPU into up to seven independent GPU instances. These run concurrently, each with its own memory, cache, and streaming multiprocessors. This allows the A100 GPU to provide guaranteed quality-of-service (QoS) at up to 7x higher utilization compared to previous GPUs.
Because MIG compartmentalizes GPU instances, it provides fault isolation - a problem in one instance does not affect others running on the same physical GPU. Each instance provides guaranteed QoS, ensuring that users get the latency and throughput they expect for their workloads.
With the DGX A100, you can use up to 56 MIG slices to solve problems with inflexible infrastructures and accurately allocate compute power to each workload. You no longer have to struggle to divide time on a box among multiple competing projects. With MIG on DGX A100, you have enough compute power to support your entire data science team.
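The arithmetic behind the 56 slices is simply 8 GPUs x 7 MIG instances each. The toy allocator below (class and method names are illustrative, not an NVIDIA API) sketches the right-sizing idea described above: workloads draw isolated instances from a shared pool instead of competing for time on whole GPUs.

```python
GPUS_PER_DGX_A100 = 8
MIG_INSTANCES_PER_GPU = 7   # one A100 partitions into up to 7 instances

class MigPool:
    """Toy allocator handing out isolated GPU instances to workloads."""

    def __init__(self, gpus=GPUS_PER_DGX_A100, per_gpu=MIG_INSTANCES_PER_GPU):
        self.free = gpus * per_gpu   # 56 slices on a DGX A100
        self.jobs = {}

    def allocate(self, job: str, instances: int) -> bool:
        """Reserve `instances` slices for `job`, or refuse if the pool is short."""
        if instances > self.free:
            return False             # not enough slices: job must wait
        self.free -= instances
        self.jobs[job] = instances
        return True

    def release(self, job: str) -> None:
        """Return a finished job's slices to the pool."""
        self.free += self.jobs.pop(job)

pool = MigPool()
pool.allocate("training", 28)        # half the system for training
pool.allocate("inference", 14)
pool.allocate("analytics", 14)
print(pool.free)                     # -> 0: all 56 slices in use at once
```

Because each MIG instance has its own memory, cache, and streaming multiprocessors, this kind of partitioning gives each job guaranteed QoS rather than best-effort time-sharing.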

LOW TOTAL COST OF OWNERSHIP: UNIVERSAL AI PLATFORM

Unparalleled TCO/ROI with all the performance of a modern AI data center at 1/10 of the cost, 1/25 of the space, and 1/20 of the power.
Today's AI data center
  • 25 racks for training & inference
  • 630 kW
  • $11M
DGX A100 data center
  • 1 rack (5x DGX A100)
  • 28 kW
  • $1M
Traditional AI infrastructures typically consist of three separate specialized clusters: training (GPU-based), inference (often CPU-based), and analysis (CPU-based). These inflexible infrastructure silos were never designed for the pace of AI. Most data centers dealing with AI workloads will likely find that these resources are either over- or under-utilized at any given time. The DGX A100 data center with MIG provides you with a single system that can flexibly adapt to your workload requirements.
In many data centers, the demand for computing resources rises and falls, resulting in servers that are mostly underutilized. IT ends up having to buy excess capacity to protect against occasional spikes. With DGX A100, you can now right-size resources for each job and increase utilization, which lowers TCO.

With DGX A100 data centers, you can easily adapt to changing business needs by deploying a single elastic infrastructure that is much more efficient.

DGX STATION A100: THE WORKGROUP APPLIANCE FOR THE AGE OF AI

The NVIDIA DGX Station A100 brings AI supercomputing to data science teams, providing data center technology without a data center or additional IT infrastructure. It is the world's only workstation-style system with four fully interconnected NVIDIA A100 Tensor Core GPUs (the same GPUs as in the NVIDIA DGX A100 server), linked via third-generation, high-bandwidth NVIDIA NVLink. It also uses a top-of-the-line server-grade CPU, super-fast NVMe storage, and state-of-the-art PCIe Gen4 buses.
DGX STATION A100 PERFORMANCE
AN AI DEVICE YOU CAN PLACE ANYWHERE
The DGX Station A100 is suitable for use in a standard office environment without special power or cooling, and simply plugs into any standard power outlet. It also includes the same Baseboard Management Controller (BMC) as the NVIDIA DGX A100, so system administrators can perform all necessary tasks over a remote connection. And its innovative cooling design keeps it quiet and cool to the touch.
SUPERCOMPUTING FOR DATA SCIENCE TEAMS
DGX Station A100 can run training, inference, and analytics workloads in parallel, and with MIG can provision up to 28 separate GPU devices for individual users and jobs, so each workload stays isolated and does not impact overall system performance.
SUPPORTED BY THE DGX SOFTWARE STACK
The DGX Station A100 shares the same fully optimized NVIDIA DGX™ software stack as all DGX systems, providing maximum performance and full interoperability with DGX-based infrastructure.

WHY NVIDIA DGX?

Almost every company recognizes the importance of AI for true business transformation. In a recent study, 84% of executives surveyed feared they would not achieve their growth goals if they did not scale AI, yet 76% also said they struggle to scale AI across their business. Many companies are hindered by the complexity and cost of deploying the right infrastructure. And for most organizations, one of the biggest AI challenges is finding AI infrastructure experts and implementing IT-proven platforms that deliver predictable, scalable performance. As a result, more and more enterprises are choosing NVIDIA's highly optimized DGX systems to power their infrastructures and enable their on-prem AI initiatives.
Access to NVIDIA AI know-how
NVIDIA DGX systems come with direct access to NVIDIA AI experts trained on our in-house infrastructure, NVIDIA SATURNV. They have the most extensive track record of field-proven implementations, giving customers the fastest time to solution.
Fully optimized, field-tested
NVIDIA DGX systems are AI supercomputers with a full-stack solution that integrates innovation and optimization across hardware and software. Thousands of customers have deployed DGX systems to date, including nine of the top 10 government institutions and eight of the top 10 universities in the US.
Supported by the largest test site for AI
NVIDIA DGX SATURNV is the massive AI infrastructure where NVIDIA's most important work is done. With thousands of DGX AI systems, NVIDIA is constantly finding opportunities for improvements and enhancements that are then rolled out to DGX customers.
Trusted infrastructure solutions
Only NVIDIA DGX offers the most comprehensive portfolio of infrastructure solutions built on the most trusted names in data center storage and networking, allowing customers to scale as their business grows.
The easiest and fastest way to deploy AI
NVIDIA and its partners offer a full range of options, from DGX-as-a-Service to DGX colocation solutions to full-fledged DGX PODs deployed on-premise to help organizations deploy AI more easily, quickly and cost-effectively.