Launched in 2014, DDR4 delivers up to 1.5x the performance of DDR3 while reducing power on the memory interface by an impressive 25%. In addition, DDR4 supports a maximum capacity of 512 GB per module and features more banks than its DDR3 predecessor, along with significantly smaller row sizes, faster bank cycling and an increased pin count (288) to support higher addressing capability.

According to IC Insights, the DDR4 standard includes a number of features that accelerate memory operations and increase SDRAM storage in servers, notebook and desktop PCs, tablets and a wide range of consumer electronics. More specifically, DDR4 supports stacked memory chips with up to 8 devices presenting a single signal load to memory controllers. In fact, compared to DDR3, DDR4 offers the potential to double module density as well as speed, while lowering power consumption and extending battery life in future 64-bit tablets and smartphones. DDR4 is widely expected to become the dominant DRAM generation this year, with a 58% market share versus 39% for DDR3.

It gets better

While DDR4 clocks in at 1.6Gbps to 3.2Gbps, the next generation of DRAM, DDR5, recently announced by JEDEC, is slated to double that bandwidth, achieving speeds of 3.2Gbps to 6.4Gbps to meet the growing bandwidth demands of data centers in the age of the Internet of Things (IoT). Indeed, a new age of pervasive connectivity has generated a digital tsunami of data that is pushing the limits of DDR4 capacity and memory bandwidth.
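
To see what that doubling means at the module level, here is a quick back-of-the-envelope sketch (not an official specification figure), assuming the conventional 64-bit data bus per DIMM; DDR5 DIMMs actually split this into two independent 32-bit channels, which does not change the total:

```python
# Rough peak bandwidth per DIMM, assuming a 64-bit data bus per module.
# Peak GB/s = per-pin data rate (Gbps) x bus width (bits) / 8 bits per byte.

BUS_WIDTH_BITS = 64

def peak_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int = BUS_WIDTH_BITS) -> float:
    """Peak transfer rate of one module in GB/s."""
    return data_rate_gbps * bus_width_bits / 8

for name, rate in [("DDR4-3200", 3.2), ("DDR5-6400", 6.4)]:
    print(f"{name}: {peak_bandwidth_gbs(rate):.1f} GB/s per module")

# DDR4-3200: 25.6 GB/s per module
# DDR5-6400: 51.2 GB/s per module
```

In other words, at its top per-pin rate a single DDR5 module could move roughly twice the data per second of a DDR4-3200 module.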

Although the DDR5 standards work is still in progress, early publicly released information shows a number of evolutionary improvements. These include higher density, a new command structure and new power saving features. DDR5 will also likely introduce signal equalization and error correction.

The introduction of high bandwidth memory (HBM) represents another approach to increasing server memory performance. Essentially, HBM bolsters locally available memory by placing low-latency DRAM closer to the CPU.

Moreover, HBM DRAM increases memory bandwidth by providing a very wide, 1024-bit interface to the SoC. At HBM2's maximum per-pin speed of 2Gbps, that works out to a total bandwidth of 256GB/s per stack. Although the per-pin rate is similar to DDR3 at 2.1Gbps, HBM's eight 128-bit channels provide approximately 15x more bandwidth than a 64-bit DDR3 module. In addition, four HBM memory stacks (for example), each delivering 256GB/s in close proximity to the CPU, provide a significant increase in both memory density (up to 8GB per stack) and bandwidth when compared with existing architectures.
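
The arithmetic behind those figures is straightforward; the short sketch below works it out from the interface widths and per-pin rates quoted above (the rates are illustrative, not guaranteed speed bins):

```python
# Aggregate bandwidth in GB/s = per-pin rate (Gbps) x interface width (bits) / 8.
# An HBM2 stack exposes a 1024-bit interface (eight 128-bit channels), so even
# at a modest per-pin rate it far outpaces a conventional 64-bit DDR3 module.

def bandwidth_gbs(pin_rate_gbps: float, interface_bits: int) -> float:
    """Aggregate bandwidth of one memory interface in GB/s."""
    return pin_rate_gbps * interface_bits / 8

hbm2 = bandwidth_gbs(2.0, 1024)   # 256.0 GB/s per stack
ddr3 = bandwidth_gbs(2.1, 64)     # 16.8 GB/s per 64-bit module

print(f"HBM2 stack:  {hbm2:.1f} GB/s")
print(f"DDR3 module: {ddr3:.1f} GB/s")
print(f"Ratio: ~{hbm2 / ddr3:.0f}x")          # roughly 15x
print(f"Four stacks: {4 * hbm2:.0f} GB/s")    # about 1TB/s next to the CPU
```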

In addition to DDR5 and HBM, hybrid DIMM technologies such as NVDIMM (non-volatile DIMMs) are being deployed to address the insatiable demand for increased memory capacity and bandwidth. According to JEDEC, NVDIMM-P, a new high-capacity persistent memory module for computing systems, will enable memory solutions optimized for cost, power usage and performance. Indeed, persistent memory on the DIMM supports multiple use cases for hyperscale, high-performance and high-capacity data centers. These applications include latency reduction, power reduction, metadata storage, in-memory databases, software-defined server RAID, as well as reduced processing load during unexpected failures.

It should be noted that NVDIMM-P enables fully accessible flash on the DIMM, allowing the system to leverage non-volatile memory as an additional high-speed memory bank. Meanwhile, NVDIMM-N is designed to enable flash as a persistent memory backup to the DRAM. In practical terms, this means DRAM data is stored locally on the flash, which creates persistence (in case of a power outage) while reducing CPU load.

In conclusion, our increasingly connected world has spurred industry heavyweights to innovate and implement new architectures in the data center to remove bottlenecks. These include DDR4 and DDR5 buffer chips, second-generation high bandwidth memory (HBM2) and hybrid DIMM technologies such as NVDIMM-P and NVDIMM-N. At Rambus, we believe the industry needs to work together on developing next-generation memory solutions, while adhering to the goal of doubling current speeds with minimal changes.

John Eble is director of architecture and signaling at Rambus Labs and Frank Ferro is senior director of product management at Rambus.