Evaluating ReRAM technology choices for cloud and data center applications

Resistive random-access memory (ReRAM) is the next promising memory technology in the race to develop more scalable, high-capacity, high-performance, and reliable storage solutions.

Resistive random access memory (ReRAM) is emerging as an alternative non-volatile memory (NVM) solution, particularly in cloud and data center environments that require ever-increasing improvements in performance and energy efficiency[1]. As the demand for data grows, driven by humans through premium services like video streaming and by machines through the Internet of Things (IoT), ReRAM technology has exhibited lower read latency and faster write performance than flash memories, while also achieving a program energy of 64 pJ/cell, a 20 percent improvement over NAND.
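
The energy figures quoted above can be turned into rough page-level numbers. The sketch below is back-of-the-envelope arithmetic only: it assumes one cell per bit (SLC-style) and uses illustrative page sizes, deriving the NAND per-cell energy from the stated 20 percent improvement.

```python
# Illustrative arithmetic using the figures quoted in the article.
# A ReRAM program energy of 64 pJ/cell, described as a 20 percent
# improvement over NAND, implies NAND at roughly 80 pJ/cell.
RERAM_PJ_PER_CELL = 64.0
NAND_PJ_PER_CELL = RERAM_PJ_PER_CELL / (1.0 - 0.20)  # ~80 pJ/cell

def page_program_energy_nj(page_bytes: int, pj_per_cell: float) -> float:
    """Energy to program one page, assuming one cell per bit (SLC-style)."""
    cells = page_bytes * 8
    return cells * pj_per_cell / 1000.0  # pJ -> nJ

# Programming a small 256 B ReRAM page vs. a 16 KB NAND page
# (page sizes are illustrative assumptions, not vendor specifications):
reram_page_nj = page_program_energy_nj(256, RERAM_PJ_PER_CELL)
nand_page_nj = page_program_energy_nj(16 * 1024, NAND_PJ_PER_CELL)
```

The per-page gap is larger than the per-cell gap simply because small, bit-alterable pages avoid programming cells the host never asked to change.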

In data center environments, 3D vertical ReRAM arrays provide high-performance memory subsystems capable of replacing traditional DRAM- or Flash-based SSDs, speeding up data storage and retrieval in substantially smaller form factors with lower energy requirements. With ReRAM, sub-five-nanosecond latencies are possible in an architecture that delivers 1 GIOPS per rack unit.

A typical ReRAM cell consists of a switching material sandwiched between two metallic electrodes. The switching effect is based on the motion of ions under the influence of an electric field and the switching material's ability to retain the resulting ion distribution. This produces a measurable change in the resistance of the device, while reducing the effects of the dielectric breakdown that degrades conventional memory components over time.
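
That measurable resistance change is what a read circuit ultimately senses. The following is a minimal conceptual sketch; the read voltage, resistance states, and threshold are illustrative assumptions, not Crossbar specifications.

```python
# Reading a resistive cell: apply a small read voltage and compare the
# resulting current against a threshold placed between the two states.
# All values below are illustrative assumptions.
READ_VOLTAGE = 0.2   # volts
LRS_OHMS = 10e3      # low-resistance state (filament formed, logic 1)
HRS_OHMS = 1e6       # high-resistance state (filament dissolved, logic 0)
I_THRESHOLD = 2e-6   # amps, between the two expected read currents

def read_cell(resistance_ohms: float) -> int:
    """Ohm's law: I = V / R; higher current implies the low-resistance state."""
    current = READ_VOLTAGE / resistance_ohms
    return 1 if current > I_THRESHOLD else 0
```

With these assumed values, the low-resistance state yields 20 µA and the high-resistance state 0.2 µA, a 100:1 read window for the sense amplifier.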

The most common challenges for ReRAM technology are temperature sensitivity, integration with standard CMOS technology and manufacturing processes, and the selector mechanism for individual ReRAM cells. As such, designers take many different approaches to implementing ReRAM based on their choice of switching material and memory cell organization.

Taken together, these variables can result in significant performance differences between ReRAM implementations. Four key areas should be considered when evaluating ReRAM:

  • Manufacturability
  • Performance
  • Density
  • Energy

Let’s take a closer look at each.

Manufacturability

CMOS-friendly materials and standard manufacturing processes are preferred when manufacturing ReRAM devices, as they allow the technology to be easily integrated between two metal lines, directly connected to CMOS IP logic blocks, and produced in existing fabs without specialized equipment or materials (Figure 1). Because ReRAM uses a low-temperature, back-end-of-line (BEOL) process, multiple layers of ReRAM arrays can be integrated on top of CMOS logic wafers to build a 3D ReRAM storage chip. This enables highly integrated solutions comprising on-chip NVM, processing cores, and subsystems on a single die in an elegant, low-cost package.

[Figure 1 | ReRAM manufacturing using standard CMOS processes.]

Compared to electron storage in a Flash memory cell, where the loss of even a few electrons causes reliability, retention, and cycling issues that lead to degradation, Crossbar's ReRAM cell operation is based on a metallic filament formed in a non-conductive layer. Because the state is defined by the filament rather than by stored charge, scaling does not impact device performance, and the cell has the potential to scale below 10 nm.

[Figure 2 | The cell operation of Crossbar ReRAM allows the technology to scale to sub-10 nm processes without degradation.]

Performance

In terms of program operations, current MLC/TLC NAND or 3D NAND flash requires about 600 µs to 1 ms to program an 8 to 16 KB page, and around 10 ms for operations on large blocks of 4 to 8 MB.
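
Those program latencies translate directly into effective per-plane program bandwidth. A quick sketch using the figures above (the 16 KB page size paired with each end of the quoted latency range is an illustrative choice):

```python
def program_bandwidth_mb_s(page_bytes: int, program_time_s: float) -> float:
    """Effective single-plane program bandwidth = page size / program time."""
    return page_bytes / program_time_s / 1e6

# Using the NAND figures quoted above: a 16 KB page programmed in
# 600 us (best case) versus 1 ms (worst case).
best = program_bandwidth_mb_s(16 * 1024, 600e-6)   # ~27 MB/s per plane
worst = program_bandwidth_mb_s(16 * 1024, 1e-3)    # ~16 MB/s per plane
```

Real SSDs hide this by striping across many planes and dies, but the per-plane figure is why write-heavy workloads are so sensitive to background operations competing for the same planes.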

NAND flash must also be erased before it can be programmed. Garbage collection is an additional layer of data management in NAND flash, required to free up blocks containing obsolete data while storage is idle. This creates problems when a new request arrives while garbage collection is moving data from block to block, introducing long, nondeterministic latencies that can reach seconds. As a result, SSD writes generally involve writing data more than once between the SSD controller, NAND Flash, and DRAM components: first when saving the data, and again when moving valid data during multiple garbage collection cycles. It is therefore common for more data to be written to an SSD's flash memory than was originally issued by the host system. This disparity is known as write amplification (WA).

WA is undesirable because it means more data is being written to the media, increasing wear and hurting performance by consuming bandwidth that would otherwise serve host requests. This is especially relevant at smaller process nodes, where the endurance of a NAND memory cell drops below 3,000 program/erase cycles.

Conversely, ReRAM uses bit-alterable, erase-free operation, delivering 100x lower read latency and 1,000x faster write performance than NAND Flash, without the constraints of building large-block memory arrays. The ability of ReRAM to perform independent atomic operations allows it to be architected into smaller pages (e.g., 256 B pages vs. 16 KB pages in NAND), each of which can be individually reprogrammed. This architecture eases the burden on storage controllers by eliminating large portions of the background memory traffic typically generated by garbage collection. Where NAND flash systems typically have WA factors in the three-to-four range, the characteristics of ReRAM enable a WA equal to one. This benefits the read and write latencies and the lifetime of storage solutions.
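
The effect of WA on drive lifetime can be sketched with the standard first-order endurance model. The capacity, daily workload, and cycle count below are illustrative assumptions chosen to match the article's figures (3,000 P/E cycles, WA of 3-4 for NAND vs. 1 for ReRAM):

```python
def drive_lifetime_years(capacity_gb: float, pe_cycles: int,
                         write_amplification: float,
                         host_writes_gb_per_day: float) -> float:
    """First-order model: total host writes the media can absorb,
    TBW = capacity * P/E cycles / WA, spread over the daily workload."""
    total_host_writes_gb = capacity_gb * pe_cycles / write_amplification
    return total_host_writes_gb / host_writes_gb_per_day / 365.0

# Illustrative 1 TB drive, 3,000 P/E cycles, 500 GB/day of host writes:
nand_years = drive_lifetime_years(1024, 3000, 3.5, 500)   # WA of 3-4 typical
reram_years = drive_lifetime_years(1024, 3000, 1.0, 500)  # erase-free, WA = 1
```

Holding everything else constant, dropping WA from 3.5 to 1 extends media lifetime by the same 3.5x factor, which is why WA is treated as a first-class TCO metric.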

Next-generation SSD controllers optimized for ReRAM will be able to update smaller pages faster, further reducing the background memory operations associated with NAND and providing lower, more deterministic read latencies on the order of tens of µs.


Energy

Reducing the number of background memory operations not only improves the performance and overall endurance of a data storage solution, but also reduces the power consumption of the storage controller, DRAM usage, and the read and write power budget consumed by the data storage components.

Density

One technical challenge faced by high-density ReRAM is sneak (or leakage) current. This can be mitigated using a selector device in one-transistor, n-resistor (1TnR) memory cell arrays, which make it possible for a single transistor to manage a large number of interconnected memory cells, enabling high-capacity solid-state storage.

While 1TnR enables a single transistor to drive over 2,000 memory cells at low power, it also introduces sneak-path leakage currents that interfere with the performance and reliability of the ReRAM array. Crossbar's field-assisted superlinear threshold selector suppresses this leakage current below 0.1 nA and has been successfully demonstrated in a 4 Mb, 3D-stackable passive integrated array. It achieves the highest reported selectivity of 10^10, an extremely sharp turn-on slope of less than 5 mV/dec, fast turn-on and recovery (<50 ns), endurance greater than 100 million cycles, and a processing temperature below 300 °C.
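
The 0.1 nA per-cell figure implies a simple worst-case bound on aggregate sneak current for the 2,000-cell line quoted above. In the sketch below, the no-selector leakage value is an illustrative assumption used only for contrast:

```python
def worst_case_sneak_current_ua(cells_on_line: int,
                                per_cell_leak_a: float) -> float:
    """Pessimistic upper bound: every unselected cell on the shared
    line leaks simultaneously. Returns microamps."""
    return cells_on_line * per_cell_leak_a * 1e6  # amps -> microamps

# 2,000 cells sharing one transistor, each suppressed below 0.1 nA:
with_selector = worst_case_sneak_current_ua(2000, 0.1e-9)   # 0.2 uA bound
# Without a selector, a half-selected cell leaking ~1 uA (an assumed,
# illustrative value) would swamp the read current entirely:
without_selector = worst_case_sneak_current_ua(2000, 1e-6)  # 2,000 uA
```

Keeping the aggregate sneak current (here bounded at 0.2 µA) well below the selected cell's read current is what makes large passive 1TnR arrays readable at all.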

Faster, more efficient storage for the cloud and data center

ReRAM technology enables next-generation enterprise storage through faster, denser, and ultra-low latency solutions capable of serving increasing data demands. As energy usage and longevity become key total cost of ownership (TCO) metrics in cloud and data center environments, advances in ReRAM and increased volumes will continue to drive the ReRAM value proposition.

Sylvain Dubois is Vice President of Strategic Marketing and Business Development at Crossbar.

Crossbar Inc.




LinkedIn: www.linkedin.com/company-beta/1059471

Google+: plus.google.com/+Crossbar-inc

YouTube: www.youtube.com/user/crossbarinc


1. “Data Centers will need faster storage system beyond NAND.” Simmtester.com. Accessed June 22, 2017. http://www.simmtester.com/Page/news/shownews.asp?num=18781