sysGen GmbH (Systeme und Informatikanwendungen Nikisch GmbH) - Am Hallacker 48 - 28327 Bremen - info@sysgen.de
NGC Catalog
The NGC catalog accelerates end-to-end workflows with enterprise-grade containers, pre-trained AI models, and industry-specific SDKs that can be deployed on-premises, in the cloud, or at the network edge.
Slurm Workload Manager
Slurm is an open-source workload manager designed specifically for the demanding requirements of high-performance computing. Slurm is used in government labs, universities, and enterprises around the world. On the November 2014 TOP500 list, Slurm performed workload management on six of the top ten most powerful computers in the world, including the GPU giant Piz Daint, which uses over 5,000 NVIDIA GPUs.
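A typical Slurm job is described by a batch script with `#SBATCH` directives. The sketch below only assembles such a script; the partition name, time limit, and training command are placeholder assumptions, and on a real cluster the file would be submitted with `sbatch train_gpu.sh`:

```python
# Assemble a minimal Slurm batch script requesting GPU resources.
# Partition, time limit, and the training command are placeholder
# assumptions for illustration; the script is written out, not submitted.
SBATCH_SCRIPT = """\
#!/bin/bash
#SBATCH --job-name=train-gpu
#SBATCH --partition=gpu
#SBATCH --nodes=1
#SBATCH --gres=gpu:2
#SBATCH --time=02:00:00
#SBATCH --output=%x-%j.out

srun python train.py
"""

with open("train_gpu.sh", "w") as f:
    f.write(SBATCH_SCRIPT)

print("wrote train_gpu.sh")
```

The `--gres=gpu:2` directive is how a job asks Slurm's generic-resource scheduler for two GPUs on the allocated node.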
GPU monitoring
NVIDIA Data Center GPU Manager (DCGM) is a tool suite for managing and monitoring NVIDIA GPUs in data center cluster environments. It includes active health monitoring, comprehensive diagnostics, system alerts, and governance policies including power and clock management.
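A common pattern on top of DCGM is to sample per-GPU metrics (e.g. via the `dcgmi` CLI or DCGM's bindings) and alert on thresholds. The sample text below is illustrative, not verbatim `dcgmi` output, and the 85 °C limit is an assumed policy:

```python
# Parse illustrative per-GPU telemetry lines (id, temperature, power)
# and flag GPUs whose temperature exceeds a limit. In production the
# samples would come from DCGM itself rather than a hard-coded string.
SAMPLE = """\
GPU 0    62   221.4
GPU 1    88   305.2
GPU 2    71   240.0
"""

def hot_gpus(sample: str, temp_limit: int = 85) -> list[int]:
    """Return GPU ids whose temperature column exceeds temp_limit."""
    hot = []
    for line in sample.strip().splitlines():
        _, gpu_id, temp, _power = line.split()
        if int(temp) > temp_limit:
            hot.append(int(gpu_id))
    return hot

print(hot_gpus(SAMPLE))  # → [1]
```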
Ganglia Monitoring System
A scalable, open-source distributed monitoring system for high-performance computing systems such as clusters and grids. It has been carefully designed for very low per-node overhead and high concurrency. Ganglia is currently deployed on thousands of clusters around the world and can scale to clusters with several thousand nodes.
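Ganglia's `gmond` daemon exposes cluster state as XML (by default on TCP port 8649), which is easy to consume with standard tooling. The snippet below parses a trimmed, illustrative XML sample with the Python standard library; a real consumer would read the socket instead, and the host and metric names are assumptions:

```python
# Parse a trimmed, illustrative gmond-style XML report and index
# metric values by (host, metric) pairs using only the stdlib.
import xml.etree.ElementTree as ET

SAMPLE_XML = """\
<GANGLIA_XML VERSION="3.7.2" SOURCE="gmond">
  <CLUSTER NAME="hpc" OWNER="ops">
    <HOST NAME="node01" IP="10.0.0.1">
      <METRIC NAME="load_one" VAL="0.42" TYPE="float" UNITS=""/>
      <METRIC NAME="mem_free" VAL="981244" TYPE="float" UNITS="KB"/>
    </HOST>
  </CLUSTER>
</GANGLIA_XML>
"""

root = ET.fromstring(SAMPLE_XML)
metrics = {
    (host.get("NAME"), m.get("NAME")): m.get("VAL")
    for host in root.iter("HOST")
    for m in host.iter("METRIC")
}
print(metrics[("node01", "load_one")])  # → 0.42
```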
Moab HPC Suite
Moab® HPC Suite is a workload and resource orchestration platform that automates complex, optimized workload scheduling decisions and management actions with multi-dimensional policies that mimic real-world decision making. These policies balance maximizing job continuity and utilization with meeting SLAs and priorities. Proven to manage the world's most advanced, diverse, and data-intensive systems, Moab HPC Suite is the preferred workload management solution for next-generation HPC facilities.
NVIDIA Triton Inference Server
Triton Inference Server streamlines AI inference by enabling teams to deploy, run, and scale trained AI models from any framework on any GPU- or CPU-based infrastructure. It gives AI researchers and data scientists the freedom to choose the right framework for their projects without impacting production deployment. It also helps developers deploy high-performance inference in the cloud, on premises, and on edge and embedded devices.
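Over HTTP, Triton speaks the KServe "v2" inference protocol: a JSON body posted to `/v2/models/<name>/infer` with named, typed input tensors. The sketch below only builds that body; the input name, shape, and data are placeholder assumptions:

```python
# Build the JSON body for a KServe v2 inference request as served by
# Triton (POST /v2/models/<name>/infer). Input name and shape are
# illustrative assumptions, not tied to any particular model.
import json

def build_infer_request(input_name: str, data: list[float]) -> str:
    body = {
        "inputs": [
            {
                "name": input_name,
                "shape": [1, len(data)],
                "datatype": "FP32",
                "data": data,
            }
        ]
    }
    return json.dumps(body)

payload = build_infer_request("INPUT0", [0.1, 0.2, 0.3])
print(payload)
```

In practice one would send this payload with an HTTP client (or use NVIDIA's `tritonclient` library, which wraps the same protocol).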
NVIDIA Magnum IO
The new compute unit is the data center, with NVIDIA GPUs and NVIDIA networks at its heart. Accelerated computing requires accelerated input/output (IO) to maximize performance. NVIDIA Magnum IO™, the IO subsystem of the modern data center, is the architecture for parallel, asynchronous, and intelligent data center IO that maximizes memory and network IO performance for multi-GPU and multi-node acceleration.
Run:AI
Run:AI's Compute Management Platform automates the orchestration, scheduling, and management of GPU resources for AI workloads. The Kubernetes-based platform gives data scientists access to all the pooled compute power they need to accelerate AI - on premise or in the cloud. IT and MLOps teams gain visibility and control over GPU scheduling and dynamic provisioning, and can increase utilization of existing infrastructure by more than 2x.
NVIDIA DOCA
Accelerates the development of networking, storage, and security applications and services on BlueField DPUs.
What Just Happened
Advanced streaming telemetry technology that provides real-time insight into network issues for quick and easy resolution.
BlueField SNAP
BlueField SNAP brings virtualized storage to bare-metal clouds and makes composable storage easy by enabling storage disaggregation.
NVIDIA Morpheus
Provides cybersecurity teams with complete visibility into security threats.
AI Enterprise
Enterprises are modernizing their data centers to run AI-driven applications and data science. NVIDIA and VMware make it easier than ever to develop and deploy a variety of AI workloads in the modern hybrid cloud.
Run complete data science workflows with high-speed GPU computing power and parallelize data loading, data manipulation, and machine learning for 50x faster end-to-end data science pipelines.
NVIDIA Fleet Command™ is a cloud service that securely deploys, manages, and scales AI applications across distributed edge infrastructure.
NVIDIA Omniverse™ is an open platform built for real-time 3D design collaboration and physically accurate simulation. Complex visual workflows can be edited and enhanced collaboratively.
NVIDIA Clara™ Holoscan is the AI computing platform for medical devices that includes hardware systems for low-latency sensor and network connectivity, optimized libraries for data processing and AI ...
Speech AI is transforming the way businesses interact with and support their customers across all industries. NVIDIA® Riva provides state-of-the-art models, fully accelerated pipelines, and tools to extend real-time applications such as virtual assistants, call center agents, and video conferencing with speech AI capabilities.
NVIDIA Maxine is a suite of GPU-accelerated SDKs that reinvent audio and video communications with AI and enhance standard microphones and cameras for clear online communications.
Software transforms a vehicle into an intelligent machine. The open NVIDIA DRIVE® SDK provides developers with all the building blocks and algorithms needed for autonomous driving. It enables developers to efficiently create and deploy a wide range of cutting-edge AV applications.
NVIDIA Metropolis consists of an application framework, a set of developer tools, and a partner ecosystem that brings together visual data and AI to improve operational efficiency and safety across a variety of industries.
Merlin enables data scientists, machine learning engineers, and researchers to build powerful recommender systems at scale. Merlin includes libraries, methods, and tools that streamline the creation of recommender systems by addressing common challenges in preprocessing, feature engineering, training, and inference.
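One of the preprocessing challenges such pipelines address is mapping high-cardinality categorical IDs (users, items) into a fixed-size embedding-table index space. The generic hashing-trick sketch below illustrates the idea; it is not the Merlin API, and the bucket count and IDs are assumptions:

```python
# Map high-cardinality categorical IDs to a fixed number of embedding
# buckets via hashing (the "hashing trick"). Generic illustration of a
# recommender preprocessing step, not Merlin-specific code.
import hashlib

NUM_BUCKETS = 1_000  # assumed embedding-table size

def hash_bucket(category: str, num_buckets: int = NUM_BUCKETS) -> int:
    """Deterministically map a categorical value to [0, num_buckets)."""
    digest = hashlib.md5(category.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_buckets

interactions = [("user_42", "item_7"), ("user_42", "item_9")]
encoded = [(hash_bucket(u), hash_bucket(i)) for u, i in interactions]
print(encoded)
```

Hashing trades a small collision risk for constant memory and no vocabulary file, which is why it is common for very large ID spaces.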
The NVIDIA Isaac™ robotics platform addresses the challenges of robotics development with an end-to-end solution that reduces costs, simplifies development, and accelerates time to market.
NVIDIA Aerial™ is an application framework for developing high-performance, software-defined, cloud-native 5G applications to meet growing consumer demand. It leverages GPU parallel processing for baseband signal and data-flow processing.
The NVIDIA HPC Software Development Kit (SDK) includes the proven compilers, libraries, and software tools essential for maximizing developer productivity, performance, and portability of HPC applications.
Multi-Instance GPU (MIG) extends the performance and value of any NVIDIA A100 Tensor Core GPU. MIG can partition the A100 GPU into up to seven instances, each fully isolated and with its own high-bandwidth memory, cache, and compute cores.
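The available instance sizes form a small profile table; the figures below match the A100 40GB profiles, while the feasibility check is a deliberate simplification (the real driver enforces additional placement and alignment rules):

```python
# A100 40GB MIG profiles: compute slices, memory, and how many
# instances of each profile a single GPU supports. The fits() check
# is a rough sketch; actual MIG placement is stricter.
A100_40GB_PROFILES = {
    "1g.5gb":  {"slices": 1, "mem_gb": 5,  "max_instances": 7},
    "2g.10gb": {"slices": 2, "mem_gb": 10, "max_instances": 3},
    "3g.20gb": {"slices": 3, "mem_gb": 20, "max_instances": 2},
    "4g.20gb": {"slices": 4, "mem_gb": 20, "max_instances": 1},
    "7g.40gb": {"slices": 7, "mem_gb": 40, "max_instances": 1},
}

def fits(requested: list[str], total_slices: int = 7) -> bool:
    """Rough check: can one A100 host the requested MIG instances?"""
    used = sum(A100_40GB_PROFILES[p]["slices"] for p in requested)
    per_profile_ok = all(
        requested.count(p) <= A100_40GB_PROFILES[p]["max_instances"]
        for p in set(requested)
    )
    return used <= total_slices and per_profile_ok

print(fits(["1g.5gb"] * 7))          # → True  (seven small instances)
print(fits(["4g.20gb", "3g.20gb"]))  # → True  (4 + 3 slices)
print(fits(["4g.20gb"] * 2))         # → False (8 slices, cap of 1)
```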