sysGen GmbH - Systeme und Informatikanwendungen Nikisch - Am Hallacker 48 - 28327 Bremen - info@sysgen.de
GPU-OPTIMISED SOFTWARE HUB FOR AI, MACHINE LEARNING AND HIGH-PERFORMANCE COMPUTING
The NGC™ catalogue is a hub for GPU-optimised AI, high-performance computing (HPC) and data analytics software that simplifies and accelerates end-to-end workflows. With enterprise-grade containers, pre-trained AI models and industry-specific SDKs that can be deployed on-premises, in the cloud or at the edge, organisations are able to build world-class solutions and deliver business value faster than ever before.
The NGC catalogue increases productivity through easy-to-implement, optimised AI frameworks and HPC application containers - allowing users to focus on developing their solutions.
EASY AI ADOPTION
The NGC catalogue lowers the barrier to AI adoption: it takes care of the heavy lifting (know-how, time and computing resources) with pre-trained models and workflows that deliver high accuracy and performance.
RUN SOFTWARE ANYWHERE ON NVIDIA GPUs
Run software from the NGC catalogue locally, in the cloud, at the edge or in hybrid and multi-cloud deployments. Software from the NGC catalogue can be deployed on bare-metal servers or in virtualised environments, maximising GPU utilisation, application portability and scalability.
DEPLOY NGC SOFTWARE WITH CONFIDENCE
Run software from the NGC catalogue with enterprise-class support for NVIDIA certified systems and get direct access to NVIDIA experts. This minimises system downtime and maximises system utilisation and productivity.
A Platform for All Use Cases
From HPC to conversational AI to medical imaging to recommender systems and more, NGC Collections offers ready-to-use containers, pre-trained models, SDKs, and Helm charts for diverse use cases and industries—in one place—to speed up your application development and deployment process.
Language Modeling
Language modeling is a natural language processing (NLP) task that determines the probability of a given sequence of words occurring in a sentence. VIEW LANGUAGE MODELING COLLECTION
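The "probability of a given sequence of words" idea can be sketched with a toy bigram model in plain Python. The corpus and probabilities below are invented for illustration; real language models in the collection use deep neural networks, not counts.

```python
from collections import Counter

# Toy bigram language model: approximate P(w1..wn) as the product of
# P(word | previous word), with probabilities estimated from counts
# over a tiny made-up corpus.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)

def sequence_probability(words):
    """Chain of bigram probabilities for a word sequence."""
    prob = 1.0
    for prev, word in zip(words, words[1:]):
        prob *= bigrams[(prev, word)] / unigrams[prev]
    return prob

print(sequence_probability("the cat sat".split()))  # seen sequence: 0.25
print(sequence_probability("the mat sat".split()))  # unseen bigram: 0.0
```

Unseen bigrams get probability zero here; practical n-gram models add smoothing, and neural language models avoid the problem entirely by generalising across contexts.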
Recommender Systems
Recommender systems are a type of information filtering system that seeks to predict the "rating" or "preference" a user would give to an item. VIEW RECOMMENDER SYSTEMS COLLECTION
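Predicting the "rating" a user would give an item can be illustrated with minimal user-based collaborative filtering: a similarity-weighted average of other users' ratings. The users, items and ratings below are hypothetical.

```python
import math

# Minimal user-based collaborative filtering on a made-up ratings table.
ratings = {
    "alice": {"gpu": 5, "cpu": 3, "ssd": 4},
    "bob":   {"gpu": 4, "cpu": 3, "ssd": 5},
    "carol": {"gpu": 1, "cpu": 5},
}

def cosine(u, v):
    """Cosine similarity over the items both users have rated."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    nu = math.sqrt(sum(u[i] ** 2 for i in common))
    nv = math.sqrt(sum(v[i] ** 2 for i in common))
    return dot / (nu * nv)

def predict(user, item):
    """Similarity-weighted average of other users' ratings for the item."""
    num = den = 0.0
    for other, theirs in ratings.items():
        if other == user or item not in theirs:
            continue
        sim = cosine(ratings[user], theirs)
        num += sim * theirs[item]
        den += sim
    return num / den if den else None

print(round(predict("carol", "ssd"), 2))  # carol's predicted ssd rating
```

Production recommenders in the NGC collection scale this idea up with learned embeddings and deep models rather than raw cosine similarity.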
Image Segmentation
Image segmentation is the field of image processing that deals with separating an image into multiple subgroups or regions that represent distinctive objects or subparts. VIEW IMAGE SEGMENTATION COLLECTION
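The separate-an-image-into-regions idea can be sketched with classical thresholding plus connected-component labelling; the tiny "image" below is invented, and real segmentation models in the collection use deep networks instead.

```python
from collections import deque

# Toy segmentation: threshold a grayscale grid into foreground vs.
# background, then label connected foreground regions (4-connectivity).
image = [
    [0, 0, 9, 9],
    [0, 0, 9, 0],
    [8, 0, 0, 0],
    [8, 8, 0, 7],
]

def segment(img, threshold=5):
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for y in range(h):
        for x in range(w):
            if img[y][x] > threshold and labels[y][x] == 0:
                current += 1                      # start a new region
                queue = deque([(y, x)])
                labels[y][x] = current
                while queue:                      # flood fill the region
                    cy, cx = queue.popleft()
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                           and img[ny][nx] > threshold and labels[ny][nx] == 0:
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return current, labels

regions, labels = segment(image)
print(regions)  # three separate foreground regions
```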
Translation
Machine translation is the task of translating text from one language to another. VIEW TRANSLATION COLLECTION
Object Detection
Object detection involves not only detecting the presence and location of objects in images and videos, but also categorizing them into everyday objects. VIEW OBJECT DETECTION COLLECTION
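Detection models are typically scored by how well predicted bounding boxes overlap the ground truth, measured as intersection-over-union (IoU). A minimal sketch with boxes as (x1, y1, x2, y2) tuples; the coordinates are invented.

```python
# Intersection-over-union for axis-aligned bounding boxes (x1, y1, x2, y2).
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])   # intersection corners
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ~= 0.143
```

Detection benchmarks usually count a prediction as correct when IoU with a ground-truth box exceeds a threshold such as 0.5.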
ASR
Automatic speech recognition (ASR) converts spoken language into text. Applications include giving voice commands to an interactive virtual assistant, generating subtitles for online videos, and more. VIEW ASR COLLECTION
Text-to-Speech
Speech synthesis or text-to-speech is the task of artificially producing human speech from raw transcripts. Text-to-speech models are used when a mobile device converts text on a webpage to speech. VIEW SPEECH SYNTHESIS COLLECTION
HPC
High-performance computing (HPC) is one of the most essential tools fueling the advancement of computational science, and that universe of scientific computing has expanded in all directions. VIEW HPC COLLECTION
ENTERPRISE-READY
Performance
From deep learning containers that are updated on a monthly basis for extracting maximum performance from your GPUs to the state-of-the-art AI models used to set benchmark records in MLPerf, the NGC catalog is a vital component in achieving faster time to solution and shortening time to market.
Security
Containers undergo rigorous security scans for common vulnerabilities and exposures (CVEs), crypto keys, private keys, and metadata before they’re posted to the catalog. LEARN MORE
Privacy
The NGC Private Registry provides a secure, cloud-native space to store your custom containers, models, model scripts, and Helm charts and share that within your organization. Access to the NGC Private Registry is available to customers who have purchased Enterprise Support with NVIDIA DGX™ or NVIDIA-Certified Systems™. LEARN MORE
Run Anywhere with Confidence
Software from the NGC catalog runs on bare-metal servers, Kubernetes, or on virtualized environments and can be deployed on premises, in the cloud, or at the edge—maximizing utilization of GPUs, portability, and scalability of applications. Users can manage the end-to-end AI development lifecycle with NVIDIA Base Command.
Software from the NGC catalog can be deployed on GPU-powered instances. The software can be deployed directly on virtual machines (VMs) or on Kubernetes services offered by major cloud service providers (CSPs). NVIDIA AI software makes it easy for enterprises to develop and deploy their solutions in the cloud.
At the Edge
As computing expands beyond data centers and to the edge, the software from NGC catalog can be deployed on Kubernetes-based edge systems for low-latency, high-throughput inference.
Create AI Applications Faster with NVIDIA TAO
NVIDIA TAO is a platform to train, adapt and optimize AI models that eliminates the need for large training sets and deep AI expertise, simplifying the creation of enterprise AI applications.
The NGC catalog provides a range of resources that meet the needs of data scientists, developers, and researchers with varying levels of expertise, including containers, pre-trained models, domain-specific SDKs, use-case-based collections, and Helm charts for the fastest AI implementations.
Deploy and Run Workloads Faster with Containers
The NGC catalog hosts containers for the top AI and data science software, tuned, tested, and optimized by NVIDIA. Fully tested containers for HPC applications and data analytics are also available, allowing users to build solutions from a tested framework with complete control.
Jumpstart Your AI Projects with Pre-trained Models and Resources
Get a head start with pre-trained models, detailed code scripts with step-by-step instructions, and helper scripts for a variety of common AI tasks that are optimized for NVIDIA Tensor Core GPUs. Models can be easily re-trained by updating just a few layers, saving valuable time.
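The "re-train by updating just a few layers" idea is transfer learning: freeze the pre-trained layers and fit only a new final layer. A miniature sketch in plain Python, where the frozen "feature extractor" and the toy dataset are invented stand-ins for real NGC models and frameworks.

```python
# Transfer learning in miniature: a frozen "pre-trained" feature
# extractor plus a small trainable output layer fitted by gradient
# descent on a toy regression task.
def features(x):
    """Frozen 'pre-trained' extractor: maps a scalar to two features."""
    return [x, x * x]

# Toy task: y = 3*x + 2*x^2, learnable as a linear map of the features.
data = [(x, 3 * x + 2 * x * x) for x in [-2, -1, 0, 1, 2]]

w = [0.0, 0.0]                      # the only trainable parameters
lr = 0.01
for _ in range(2000):               # plain SGD on squared error
    for x, y in data:
        f = features(x)
        pred = sum(wi * fi for wi, fi in zip(w, f))
        err = pred - y
        w = [wi - lr * err * fi for wi, fi in zip(w, f)]

print([round(wi, 2) for wi in w])   # converges to approximately [3.0, 2.0]
```

Because the extractor stays fixed, only two parameters are trained, which is why fine-tuning needs far less data and compute than training from scratch.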
Helm charts automate software deployment on Kubernetes clusters. The NGC catalog hosts Kubernetes-ready Helm charts that make it easy to consistently and securely deploy both NVIDIA and third-party software.
NVIDIA GPU Operator is a suite of NVIDIA drivers, container runtime, device plug-in, and management software that IT teams can install on Kubernetes clusters to give users faster access to run their workloads.
Build AI Solutions Faster with All of the Software You Need
Collections make it easy to discover the compatible framework containers, models, Jupyter notebooks, and other resources to get started in AI faster. Each collection also provides detailed documentation for deploying the content for specific use cases.
NGC catalog offers ready-to-use collections for various applications, including NLP, ASR, intelligent video analytics, and object detection.
Deliver Solutions Faster with Ready-to-Deploy AI Workflows
The NGC catalog features NVIDIA Transfer Learning Toolkit, NVIDIA Triton™ Inference Server, and NVIDIA TensorRT™ to enable deep learning application developers and data scientists to re-train deep learning models and easily optimize and deploy them for inference.
Easily deploy software from the NGC catalogue on any platform - in the cloud, on-premises with NVIDIA certified systems or at the edge - and add value to your investment with NGC support services.
The NGC catalogue software runs on a wide range of NVIDIA GPU-accelerated platforms, including NVIDIA certified systems, NVIDIA DGX™ systems, workstations with NVIDIA TITAN and NVIDIA Quadro® GPUs, virtual environments with NVIDIA Virtual Compute Server and major cloud platforms.
NVIDIA NGC support services provide assistance to businesses to ensure NVIDIA certified systems run optimally and maximise system utilisation and user productivity. With this service, enterprise IT professionals have direct access to NVIDIA experts to quickly resolve software issues and minimise system downtime.
NVIDIA partners offer a range of data science, AI training and inference, high-performance computing (HPC), and visualization solutions.
NGC Catalog Frequently Asked Questions
The NGC catalog provides a comprehensive collection of GPU-optimized containers for AI, machine learning, and HPC that are tested and ready to run on supported NVIDIA GPUs on premises, in the cloud, or at the edge. In addition, the catalog provides pre-trained models, model scripts, and industry solutions that can be easily integrated into existing workflows.
Compiling and deploying deep learning frameworks can be time-consuming and prone to errors. Optimizing AI software requires expertise. Building models requires expertise, time, and compute resources. The NGC catalog takes care of these challenges with GPU-optimized software and tools that data scientists, developers, IT, and users can leverage so they can focus on building their solutions.
Each container has a pre-integrated set of GPU-accelerated software. The stack includes the chosen application or framework, NVIDIA CUDA® Toolkit, accelerated libraries, and other necessary drivers—all tested and tuned to work together immediately with no additional setup.
The NGC catalog features the top AI software, including TensorFlow, PyTorch, MXNet, NVIDIA TensorRT, RAPIDS™, and many more. Browse the NGC catalog to see the full list.
The NGC catalog containers run on PCs, workstations, HPC clusters, NVIDIA DGX systems, on NVIDIA GPUs on supported cloud providers, and NVIDIA-Certified Systems. The containers run in Docker and Singularity runtimes. View the NGC documentation for more information.
NVIDIA offers virtual machine image files in the marketplace section of each supported cloud service provider. To run an NGC container, simply pick the appropriate instance type, run the NGC image, and pull the container into it from the NGC catalog. The exact steps vary by cloud provider, but you can find step-by-step instructions in the NGC documentation.
The most popular deep learning software such as TensorFlow, PyTorch, and MXNet are updated monthly by NVIDIA engineers to optimize the complete software stack and get the most from your NVIDIA GPUs.
No, it’s a catalog that delivers GPU-optimized software stacks.
The NGC Private Registry was developed to provide users with a secure space to store and share custom containers, models, model scripts, and Helm charts within their enterprise. The Private Registry allows them to protect their IP while increasing collaboration.
Users get access to the NVIDIA Developer Forum, supported by a large community of AI and GPU experts from the NVIDIA customer, partner, and employee ecosystem. In addition, NGC Support Services provides L1-L3 support on NVIDIA-Certified Systems, available through NVIDIA OEM Partners like us.
NVIDIA-Certified Systems, consisting of NVIDIA EGX™ and HGX™ platforms, enable enterprises to confidently choose performance-optimized hardware and software solutions that securely and optimally run their AI workloads—both in smaller configurations and at scale. See the full list of NVIDIA-Certified Systems or find optimal certified solutions from Supermicro and Gigabyte directly at sysGen.