High-Performance Computing (HPC) is used for scientific, technical, and commercial tasks in the calculation, modeling, and simulation of complex systems and the processing of large amounts of data. sysGen has been a successful solution provider for HPC clusters for more than 20 years and has supplied the most powerful HPC clusters with GPU coprocessors in the German-speaking countries.
sysGen supports the following HPC Cluster types:
sysGen HPC Cluster Managers make it easy to deploy and manage:
Our software automates the process of building and managing Linux clusters in your data center and in the cloud:
Deep learning and high-performance computing are converging, and the required infrastructure and cluster software are virtually identical for both applications. Take a look at our solutions pages to get an idea of the extreme performance of Tesla V100 solutions.
You should pay special attention to the world's most powerful HPC/DL server, the DGX-2. The DGX-2 has 16 V100 cards connected bidirectionally via 12 NVSwitches at 2.4 TB/s, and they work like a single virtual GPU with 512 GB of memory. Complex tasks are thus solved in a fraction of the previous computing time.
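To make the idea concrete, here is a minimal PyTorch sketch (model and batch sizes are placeholders, not sysGen code) of how a framework can shard work across every visible GPU so that, on a DGX-2, the 16 V100s behave like one large accelerator:

```python
import torch
import torch.nn as nn

# On a DGX-2 this reports 16; elsewhere, however many GPUs are visible.
n_gpus = torch.cuda.device_count()
print(f"Visible GPUs: {n_gpus}")

# A toy layer; DataParallel splits each input batch across all visible GPUs
# and gathers the results, so callers see a single logical device.
model = nn.Linear(4096, 4096)
if n_gpus > 1:
    model = nn.DataParallel(model)
model = model.cuda()

x = torch.randn(64, 4096).cuda()
y = model(x)  # the forward pass is sharded across the GPUs transparently
print(y.shape)
```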
Exploding data volumes and increasingly complex workloads such as artificial intelligence are creating an urgent need for best practices in high-performance computing (HPC).
The advantages of a high-density data center include:
It all starts with the workloads to be handled. A wide range of workloads need the computational resources that clusters can offer.
For example, there are HPC workloads; Big Data workloads using Hadoop, Spark, or Cassandra; machine learning or deep learning workloads with TensorFlow, Torch, Theano, and Caffe; and workloads running inside Docker containers orchestrated by Kubernetes or Mesos, where the containerized workloads are microservices, machine learning, Big Data, and so on.
Clusters are always built from a set of hardware servers running Linux, connected by a network. However, clusters are hard to set up and then to manage afterwards. For these reasons, sysGen recommends using the Bright Cluster Manager to make cluster management easier. Bright ties everything together and makes all of the hardware inside your cluster, such as servers, switches, and GPU units, appear as a single logical unit.
On every server there is a lightweight cluster management daemon, and these daemons communicate with each other to make the cluster manageable. All of these workloads come with frameworks: for HPC you submit jobs with the SLURM workload manager, for Big Data you install Hadoop or Spark, and for containers you use Kubernetes or Mesos. These applications and infrastructure frameworks are also hard to set up, so they are integrated into the Bright Cluster Manager to make life easier.
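As a minimal sketch of how an HPC job reaches SLURM, assuming `sbatch` is available on a cluster login node (job name, resources, and the wrapped command are placeholders):

```python
import subprocess

# Placeholder job parameters; adjust partition and resources to your cluster.
cmd = [
    "sbatch",
    "--job-name=demo",
    "--ntasks=4",               # request 4 tasks
    "--time=00:10:00",          # 10-minute wall-clock limit
    "--wrap", "srun hostname",  # the command the job runs
]
result = subprocess.run(cmd, capture_output=True, text=True, check=True)
print(result.stdout.strip())    # e.g. "Submitted batch job 12345"
```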
Furthermore, you are supported in integrating applications like OpenStack, Hadoop, Spark, TensorFlow, Caffe, and Torch, and afterwards Bright allows you to manage those frameworks.
Once these applications are set up, administrators get an interface that lets them monitor and health-check the workload frameworks on the one hand and control them on the other. It is therefore possible to use the same cluster to host containerized microservices and run HPC jobs at the same time. You can dedicate certain nodes in your cluster to running a particular type of workload, and nodes can easily be repurposed, manually as well as automatically. You can also monitor the workload within each framework and reassign nodes according to the policies you defined beforehand, as the sketch below illustrates.
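The following is only a schematic sketch of such a reassignment policy; the `Node` type and the backlog counts are hypothetical stand-ins for whatever your cluster manager and workload managers actually report:

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    role: str   # "hpc" or "bigdata"
    idle: bool

def rebalance(nodes, hpc_backlog, bigdata_backlog):
    """Shift idle nodes toward the framework with the deeper job backlog.
    In practice the backlogs would come from SLURM and Spark; here they
    are plain integers."""
    for node in nodes:
        if not node.idle:
            continue
        if hpc_backlog > bigdata_backlog:
            node.role = "hpc"        # repurpose toward HPC
        elif bigdata_backlog > hpc_backlog:
            node.role = "bigdata"    # repurpose toward Big Data

nodes = [Node("node01", "bigdata", idle=True), Node("node02", "hpc", idle=False)]
rebalance(nodes, hpc_backlog=12, bigdata_backlog=3)
print([(n.name, n.role) for n in nodes])  # node01 is repurposed to "hpc"
```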
If, in this example, your Big Data workload goes down and your HPC workload goes up, Bright can adjust to those needs. Bright also makes it easy to extend your on-premises infrastructure to the public clouds.
The advantage of Bright is that it provisions your cloud nodes with the same software image used on your on-premises nodes. That means the same image, the same management interface, and the same daemon, wherever the job is running.
So far we have not talked about running virtualized workloads. The recommended way to run virtualized workloads in Bright is with OpenStack. OpenStack is hard to deploy, so Bright provides a certified OpenStack distribution that streamlines the entire deployment process and provides a management interface after the deployment is complete. Of course, you can provision VMs through OpenStack with any OpenStack image you like, but it is also possible to deploy VMs using a Bright software image. In this way, the VMs can be provisioned, configured, monitored, and health-checked as if they were part of the cluster that contains them.
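For illustration, here is a hedged sketch of provisioning one VM through OpenStack's Python SDK; the cloud, image, flavor, and network names are placeholders for your environment (the image could equally be a Bright software image registered with OpenStack):

```python
import openstack

# Assumes a cloud named "mycloud" is configured in clouds.yaml.
conn = openstack.connect(cloud="mycloud")

image = conn.compute.find_image("node-image")        # placeholder name
flavor = conn.compute.find_flavor("m1.large")        # placeholder flavor
network = conn.network.find_network("cluster-net")   # placeholder network

server = conn.compute.create_server(
    name="vm-node01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)  # block until the VM is ACTIVE
print(server.name, server.status)
```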
Another interesting use case is cluster on demand, which allows an administrator to spin up virtualized Bright clusters inside OpenStack. This gives organizations the ability to give individual users or groups of users their own virtualized Bright cluster with which they can do whatever they want. Naturally, the virtualized clusters can be used for all sorts of purposes, such as running HPC, Big Data, or machine learning workloads, and they can be resized very easily, so nodes can be added or removed whenever necessary.
Some organizations want to be able to use cluster on demand without the overhead of virtualization, so as of Bright 8.0 these users can use Ironic in combination with cluster on demand. This effectively allows the administrator to create sub-clusters inside their large cluster. Each sub-cluster is completely independent from the cluster that contains it and from any other sub-cluster. If the workload demand inside these frameworks changes, the type of nodes running inside these clusters can be changed just as on regular clusters, and from each of these clusters it is possible to burst to the public cloud: the clusters can be extended with resources from AWS or Azure, again with the same software images, the same daemon, and the same management interface, wherever the job is running.
Lastly, the number of nodes that you are running in the public cloud can be increased or decreased based on workload. As cloud workload decreases, Bright can spin down those instances to save you money. In addition, if the workload continues to decrease, physical nodes in your cluster can be shut down to save power; as the workload picks back up, Bright can quickly power on and provision these nodes to handle the demand.
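A simple sketch of such a scale-down/scale-up decision, using AWS via boto3 (the queue-depth source, region, and instance IDs are placeholders, not Bright's internal mechanism):

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-central-1")  # placeholder region

def scale_cloud_nodes(queue_depth, instance_ids):
    """Stop burst instances when the job queue drains; start them again
    when jobs are waiting. `queue_depth` would come from the workload
    manager; `instance_ids` are the cloud nodes that were provisioned."""
    if queue_depth == 0:
        ec2.stop_instances(InstanceIds=instance_ids)   # spin down to save money
    else:
        ec2.start_instances(InstanceIds=instance_ids)  # power up for demand

scale_cloud_nodes(queue_depth=0, instance_ids=["i-0123456789abcdef0"])
```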