In-Memory-Computing / In-Memory-Database

In science and business applications, in-memory computing is a key technology for processing data held in an "in-memory database". Older systems were based on traditional storage media (HDDs and SSDs) and relational databases queried with SQL, but these are increasingly regarded as inadequate for today's science and business intelligence needs. Data is accessed far more quickly when it resides in random-access memory (RAM) or flash memory than when it must be read from an HDD or SSD; "in-memory processing" therefore allows data to be analyzed in real time, enabling faster reporting and decision-making in business.

In-Memory-Computing at Scale – The Next Computing Frontier

IT department adoption of In-Memory Computing (IMC) is on the rise, along with specific use cases of in-memory technology such as stream analytics. In-memory computing is without doubt the latest paradigm for high-performance computing, but scaling memory-centric architectures and mega-platforms is becoming a serious problem for SaaS vendors, cloud providers, and enterprises that need to harness massive datasets.

Reaching the Limits of Traditional Scaling

Architecting DRAM-based computing clusters should be a straightforward exercise: provision enough DRAM capacity across your clusters for your data sets to fit. Except that real-time data can easily grow beyond the limits of physical DRAM pools. Scaling in-memory compute infrastructure is a real problem in an era of up-to-the-second decision making on ever-larger data sets. When data sets do exceed physical DRAM pools, application performance suffers: swapping or paging data to a lower-performance tier inhibits performance at a time when the competition may have larger DRAM pools and better performance than you. Further architectural complications arise when software sharding across numerous nodes and LIFO cache-purging techniques put unique stresses on scaling in-memory computing. As data sets grow, scaling in-memory computing infrastructure faces several barriers:

  • Cost of DIMM modules – extremely high relative to the capacity they provide
  • Large-capacity DIMM pricing doesn’t scale linearly – relative to a 64GB DIMM, pricing for a 128GB DIMM is not linear; the higher density commands a price premium
  • Limited DIMM slots available within servers – effective scaling may require more nodes than anticipated because of the limited number of available DIMM slots
  • Low CPU utilization – scaling may fill individual nodes within a compute cluster with DIMMs, while CPU utilization cannot be optimized for the application load
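The slot-count barrier above can be made concrete with a back-of-envelope sizing sketch. The server configuration and function below are our own illustration, not figures from the text:

```python
import math

def nodes_required(dataset_gib, dimm_gib, slots_per_server):
    # How many servers are needed just to hold the data set in DRAM?
    per_node_gib = dimm_gib * slots_per_server
    return math.ceil(dataset_gib / per_node_gib)

# Hypothetical configuration for illustration only: a 24-slot server
# fully populated with 64GB DIMMs holds 1536 GiB of DRAM.
print(nodes_required(12 * 1024, 64, 24))  # a 12 TiB data set needs 8 nodes
```

Note that once the data set grows past the last node's DRAM pool, a whole additional server must be purchased, even if only a fraction of its memory is needed.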

Cloud and IT architects face a Sisyphean task in scaling compute clusters to accommodate growth. No sooner have you scaled your in-memory compute clusters than data sets grow to exceed the DRAM pools again, or you incur immense CAPEX and OPEX in order to keep up.

But what if you could scale memory configurations without depending solely on very expensive DRAM?

New Possibilities for In-Memory Scaling

The Ultrastar DC ME200 combines one or more custom NVMe™ drives, tuned for low latency and high performance, with a software layer that expands system RAM onto them. Unmodified Linux operating systems using this technology can address system memory up to eight times the capacity of the DRAM installed in a server with near DRAM speeds. Memory-intensive, highly parallel applications such as Memcached can take advantage of this extra system RAM without any changes. For example, a 1U server with 256 GiB of DRAM installed can be extended to make use of up to 2 TiB of Memory Extension RAM.
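The 8:1 expansion described above is simple arithmetic; a minimal sketch, using the 1U example from the text (the function name is ours, not a vendor API):

```python
def effective_memory_gib(dram_gib, expansion_ratio=8):
    # Effective system RAM when DRAM is extended up to `expansion_ratio`
    # times its capacity; memory-extension drives supply the remainder.
    extension_gib = dram_gib * (expansion_ratio - 1)
    return dram_gib + extension_gib

# The 1U example from the text: 256 GiB of DRAM extended to 2 TiB.
print(effective_memory_gib(256))  # 2048 GiB = 2 TiB
```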

Figure: The upper diagram depicts the traditional compute-memory-storage architecture; the second diagram depicts how Memcached configurations can take advantage of Ultrastar memory drives to augment server DRAM, creating a virtualized memory pool that enables greater memory expansion.

Memory Expansion for the Data Center

Ultrastar memory drives can be used to scale existing system memory, promote server consolidation, and reduce the complexity of splitting large multi-TB data sets across multiple servers. They provide your applications with large amounts of system memory at a fraction of the cost of DRAM, without changes to your OS and application stacks. Western Digital's advanced software algorithms are designed to maintain near-DRAM performance across a variety of applications, targeting primarily highly parallel workloads with high transaction counts.

Dramatically Scale System Memory

Web application caching in particular requires large amounts of system memory to quickly ingest and analyze vast streams of data from Internet users, transaction events, and IoT devices. High concurrency environments, such as virtualized servers and container-based applications, are prime examples where memory usage can quickly outpace processing capabilities, requiring expensive additional scale-out servers to house the extra memory and virtual machines.


  • Ideal for Redis™ and Memcached
  • No need to modify application stacks or Linux
  • Ability to scale to large memory pool sizes
  • Reduce node count by consolidating with Ultrastar DC ME200
  • NVMe™ U.2 or AIC HH-HL form factors
  • 1TiB, 2TiB, and 4TiB memory capacities
  • Up to 24TiB in 2P servers (1U)
  • Up to 48TiB in 4P servers (2U)
  • Up to 96TiB in 8P servers (4U)

Applications and Workloads

  • Real-Time Analytics
  • Web-scale caching
  • High Performance Computing (HPC)
  • Genomics
  • Cloud Services
  • Hosting Providers
  • Telcos
  • Software-as-a-Service
  • IoT Time Series Analysis

Ultrastar® DC ME200 Memory Extension Drive with Memcached enables scalable in-memory caching for a better TCO

Mobile and web applications succeed or fail by their responsiveness. A shopping website whose product pages take half a second to generate is in danger of losing impatient customers to a quicker site. A mobile gaming world whose state updates take too long will suffer from “lag” and may be dropped by gamers seeking a more immersive and responsive gameplay experience. Memcached has been used to speed up these kinds of applications for over 15 years. It provides a simple, high-performance means of updating and storing transient user state or caching the results of heavyweight database processes. This has the double benefit of reducing latency for the end user while also minimizing the load on the backend database.
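The caching pattern described above is commonly called cache-aside: check the cache first, and only on a miss run the heavyweight database work and store the result. A minimal sketch, with an in-process dict standing in for a Memcached client (real code would use a client library such as pymemcache against a running memcached server):

```python
class FakeCache:
    # Stand-in for a Memcached client; only get/set are sketched here.
    def __init__(self):
        self._store = {}
    def get(self, key):
        return self._store.get(key)
    def set(self, key, value, expire=0):
        self._store[key] = value

def render_product_page(product_id, cache, db_lookup):
    # Cache-aside: serve from cache when possible, fall back to the database.
    key = f"page:{product_id}"
    page = cache.get(key)
    if page is None:                      # cache miss: do the heavy work once
        page = db_lookup(product_id)
        cache.set(key, page, expire=300)  # cache for subsequent requests
    return page

calls = []
def slow_db_lookup(pid):
    calls.append(pid)                     # tracks how often the DB is hit
    return f"<html>product {pid}</html>"

cache = FakeCache()
render_product_page(42, cache, slow_db_lookup)  # miss: queries the database
render_product_page(42, cache, slow_db_lookup)  # hit: served from cache
print(len(calls))  # the database was queried only once
```

This is the "double benefit" in code: the second request never touches the database, so the user sees lower latency and the backend sees less load.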

Memcached Server Sprawl

Memcached is a cache whose size can grow to the size of a system’s memory. The larger the cache, the higher its hit rate and effectiveness. Because the amount of DRAM that can economically be added to conventional servers is limited, large arrays of servers are often used to increase the total usable cache for an application. This poses two problems for data center architects: First, while it is really only the extra memory that Memcached needs, each additional server has significant non-memory costs such as the processors, power supplies, and motherboards. Second, the space, power, and cooling required by these additional servers and their DRAM can become a significant operational expense over time.
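The sprawl economics above can be sketched numerically: every server added for its DRAM also repeats the non-memory overhead cost. All prices and capacities below are placeholders for illustration, not vendor figures:

```python
import math

def memcached_fleet_cost(target_cache_gib, cache_per_server_gib,
                         server_base_cost, dram_cost_per_gib):
    # Split fleet cost into DRAM versus non-memory overhead
    # (CPUs, power supplies, motherboards, chassis).
    n = math.ceil(target_cache_gib / cache_per_server_gib)
    dram_cost = n * cache_per_server_gib * dram_cost_per_gib
    overhead_cost = n * server_base_cost
    return n, dram_cost, overhead_cost

# Illustrative only: a 6 TiB cache built from 512 GiB-per-server nodes.
n, dram, overhead = memcached_fleet_cost(6 * 1024, 512, 8000, 5)
print(n, dram, overhead)  # every extra server repeats the fixed overhead
```

The overhead term grows linearly with server count even though only the DRAM term buys cache, which is exactly the first problem the text describes.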

Avoiding Memcached Sprawl

By increasing the economically reasonable amount of effective RAM available per server by up to eight times, the Ultrastar DC ME200 Memory Extension Drive can help data center architects keep Memcached server sprawl to a minimum. This memory extension not only reduces the number of servers required, it also makes better use of the remaining servers by allowing the CPUs and other overhead components in each server to be amortized over a larger amount of Memcached cache space.

Replacing DRAM with Ultrastar DC ME200 Memory Extension Drive

Western Digital has benchmarked the Memcached performance of Ultrastar memory in order to validate that it provides near-DRAM performance while expanding RAM capacity four or eight times at significant cost savings. Mirroring a typical Memcached use case, the testing consisted of high concurrency, small requests (1KB) with a 10:90 SET to GET ratio from several testing clients to the Memcached server.

The Memcached server was configured as a baseline using only physical DRAM to provide 768 GiB of system RAM. This baseline system then had its DRAM reduced by three-quarters, to only 192 GiB, while using Ultrastar memory to provide a combined total of 768 GiB of effective system RAM. Finally, the server had its physical memory reduced to a mere 96 GiB of DRAM while using Ultrastar memory to provide the remainder of the total 768 GiB of system RAM, a reduction in DRAM usage by 87.5%.
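A request mix with the shape described above can be sketched as a simple generator; the key space, seed, and tuple format are our own assumptions, not details of the actual test harness:

```python
import random

def benchmark_workload(n_ops, set_ratio=0.10, value_bytes=1024, seed=7):
    # Request mix mirroring the benchmark described above: small (1 KB)
    # values with a 10:90 SET-to-GET ratio against a large key space.
    rng = random.Random(seed)
    value = b"x" * value_bytes
    for _ in range(n_ops):
        key = f"key:{rng.randrange(1_000_000)}"
        if rng.random() < set_ratio:
            yield ("set", key, value)
        else:
            yield ("get", key)

ops = list(benchmark_workload(10_000))
sets = sum(1 for op in ops if op[0] == "set")
print(sets, len(ops) - sets)  # roughly a 10:90 split of SETs to GETs
```

Driving such a mix from several concurrent clients, as the text describes, is what exposes whether the extended-memory tier can keep up with DRAM.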

As shown in the graph below, the Ultrastar-memory-enabled system was able to provide 85% of the performance of the full 768 GiB DRAM solution while requiring only 96 GiB of DRAM backed by the Ultrastar memory devices. Enabling Memcached servers with such capacities at such economical DRAM costs can minimize the number of Memcached servers required in a cluster.
