
Networking Guidance

The increasing prevalence of cloud computing, the scaling of data centers, and the strongly growing demands of high-performance computing, big data, artificial intelligence / deep learning, and storage have intensified the trend towards higher bandwidths. Transfer rates of 10, 25, 50 and 100 Gb/s are becoming more widely used, and even long-haul links at 100 Gb/s are gaining popularity.

However, raw transfer rate is not the only factor that determines networking performance. Modern HPC systems and storage clusters also require extremely low latency, which governs the dead time before an initiated data transfer actually begins. This overhead weighs particularly heavily when many small data snippets or messages are sent.
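The effect can be illustrated with a back-of-the-envelope model: total transfer time is a fixed startup latency plus the serialization time of the payload. The figures below (1 µs latency, 100 Gb/s link) are illustrative assumptions, not measurements of any specific fabric.

```python
# Simple cost model: transfer time = fixed latency + size / bandwidth.
# All numbers are illustrative assumptions, not vendor measurements.

def transfer_time(size_bytes, latency_s, bandwidth_bps):
    """Time to move one message: startup latency plus serialization time."""
    return latency_s + size_bytes * 8 / bandwidth_bps

def effective_throughput(size_bytes, latency_s, bandwidth_bps):
    """Achieved rate in Gb/s for a single message of the given size."""
    return size_bytes * 8 / transfer_time(size_bytes, latency_s, bandwidth_bps) / 1e9

link = dict(latency_s=1e-6, bandwidth_bps=100e9)  # assumed 1 µs, 100 Gb/s

for size in (512, 64 * 1024, 16 * 1024 * 1024):
    print(f"{size:>10} B -> {effective_throughput(size, **link):6.2f} Gb/s effective")
```

A 512-byte message achieves only a few Gb/s because the fixed latency dominates, while a 16 MiB transfer comes close to the nominal line rate. This is why latency, not bandwidth, limits workloads that exchange many small messages.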

Transmission protocols also play an important role. With InfiniBand, Omni-Path and especially NVMe you have the right tools on your side, as they operate with significantly lower overhead than Ethernet.


Ethernet

  • Omnipresent, widely used
  • Pause frames and Priority Flow Control (PFC) for lossless operation, supported by some switches and a few routers

InfiniBand, Omni-Path

  • Reliable link layer
  • Credit-based flow control
  • Rack and LAN technology (no MAN or WAN)
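Credit-based flow control is what makes these fabrics lossless: the receiver grants one credit per free buffer slot, and the sender may only transmit while it holds a credit, so packets are delayed rather than dropped. A minimal sketch, with assumed class and method names chosen for illustration:

```python
# Minimal sketch of credit-based flow control (illustrative names, not a
# real fabric API): the sender transmits only while it holds credits, so
# the receive buffer can never overflow and no packet is dropped.

from collections import deque

class CreditLink:
    def __init__(self, buffer_slots):
        self.credits = buffer_slots   # one credit per free receive buffer
        self.rx_buffer = deque()

    def send(self, packet):
        if self.credits == 0:
            return False              # sender must wait; nothing is dropped
        self.credits -= 1
        self.rx_buffer.append(packet)
        return True

    def consume(self):
        packet = self.rx_buffer.popleft()
        self.credits += 1             # credit is returned to the sender
        return packet

link = CreditLink(buffer_slots=2)
print(link.send("p1"), link.send("p2"), link.send("p3"))  # True True False
link.consume()                                            # frees one slot
print(link.send("p3"))                                    # True
```

Contrast this with plain Ethernet, which drops frames under congestion and relies on higher layers (e.g. TCP) to retransmit them.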

Omni-Path Features

Omni-Path provides Packet Integrity Protection (PIP), a link-level error check applied to all data passing over the wire. PIP enables transparent detection and correction of transmission errors without costly end-to-end retransmission. Dynamic Lane Scaling (DLS) maintains link continuity in the event of a lane failure: the remaining lanes of the connection carry on the traffic. Traffic Flow Optimization (TFO) improves quality of service by prioritizing data packets, allowing higher-priority traffic to overtake lower-priority packets regardless of packet ordering.
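The idea behind PIP's link-level check can be pictured as a per-hop checksum-and-replay loop. The sketch below uses CRC-32 purely as a stand-in; it is not Intel's actual wire format:

```python
# Illustrative per-hop integrity check in the spirit of PIP (CRC-32 used as
# a stand-in for the real link-level code; not Omni-Path's wire format).

import zlib

def frame(payload: bytes) -> bytes:
    """Append a CRC-32 over the payload before it goes onto the link."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def check(framed: bytes):
    """Verify the CRC on receive; None signals the link to replay the packet."""
    payload, crc = framed[:-4], int.from_bytes(framed[-4:], "big")
    return payload if zlib.crc32(payload) == crc else None

good = frame(b"omni-path packet")
bad = bytes([good[0] ^ 0x01]) + good[1:]   # single bit flip on the wire

print(check(good))   # payload accepted unchanged
print(check(bad))    # None -> hop replays the packet transparently
```

Because the check and replay happen per hop at the link level, upper layers never see the corruption, which is what "transparent" correction means here.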

Host integration

Omni-Path uses traditional host adapter cards for server-side connectivity. In the future, Xeon Phi and Xeon processors will offer an in-package host adapter configuration with the fabric ASIC integrated into the processor package, and the Omni-Path interface may even be integrated directly into the processor die. This increases performance and energy efficiency and reduces costs by eliminating the additional hardware otherwise needed to attach a host adapter via the PCIe bus.
