What designers can expect with DDR4 SDRAM

Josh Lee of Uniquify sets the clock on DDR timing requirements in memory subsystems.

Double Data Rate fourth-generation Synchronous Dynamic Random-Access Memory (DDR4 SDRAM) promises higher clock frequencies and data transfer rates than DDR3, along with other performance enhancements. But these benefits bring a number of design challenges, including narrower timing windows and new threats to system yield and field reliability. Josh explains how designers can address these stringent timing requirements and resolve static and dynamic variation issues during system operation in DDR memory subsystems.


ECD: The final ratified spec for DDR4 SDRAM is due any day now. What’s in store for designers as it becomes more widely available?

LEE: Let’s start with a few definitions. DDR SDRAM is a class of memory ICs. When we refer to the DDR memory subsystem, we are referring to the host System-on-Chip (SoC) controlling and accessing the DDR memory, the interface and interconnect between the host and DDR memory devices, which is typically a PCB, and the DDR SDRAM device itself.

DDR4 is the latest DRAM interface specification from the JEDEC standards organization. When the final ratified spec becomes available later this year, designers will note that DDR4 offers a higher range of clock frequencies and data transfer rates over DDR3, as well as lower voltage.

The topology is expected to be changed as well. For example, DDR4 will not have multiple Dual In-line Memory Modules (DIMMs) per channel. Instead, designers should expect a point-to-point topology where each channel in the memory controller is connected to a single DIMM. Another change will be switched memory banks for servers.

It should be no surprise that the enhanced performance promised by DDR4 will bring more stringent design requirements. In particular, the total timing window – that is, the sum of timing margins in the DDR SDRAM, PCB, interconnect, host SoC, and package – has shrunk dramatically from 2.5 nanoseconds in DDR1 to only 312 picoseconds in DDR4 (see Table 1). This narrow margin means that designers need to be exacting about the implementation of the DDR memory subsystem so that it operates error-free over a range of operating conditions. The DDR memory subsystem also must accommodate the inherent variability in all of its components.

Table 1: Each generation of the DDR SDRAM interface specification from JEDEC has different clock speeds and timing margins.
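The shrinking total timing window tracks the per-bit unit interval (UI), which is simply the reciprocal of the data rate. A quick sketch of that arithmetic, using representative top data rates for each generation (assumed here for illustration; consult the JEDEC speed bins for exact figures):

```python
# Per-bit unit interval (UI) across DDR generations. The data rates below
# are representative top speeds (DDR1-400 through DDR4-3200), in MT/s.
rates_mtps = {"DDR1": 400, "DDR2": 800, "DDR3": 1600, "DDR4": 3200}

for gen, rate in rates_mtps.items():
    ui_ps = 1e12 / (rate * 1e6)  # unit interval in picoseconds
    print(f"{gen}-{rate}: UI = {ui_ps:.1f} ps")
```

At 400 MT/s the UI is 2,500 ps; at 3,200 MT/s it is 312.5 ps, matching the 2.5 ns and 312 ps figures cited above. Every source of skew, jitter, and variation in the subsystem must fit inside that interval.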

ECD: With IP being pervasive in SoC designs, how important is a DDR SDRAM memory controller subsystem IP?

LEE: DDR is ubiquitous in today’s modern electronic products. If it has a processor in it, it probably has DDR memory as well. DDR is used in products ranging from automobiles to mobile devices, laptops, printers, and home entertainment systems. In most systems, the DDR memory interface is the fastest bus in the system. This also implies that it is the most critical area of the design when it comes to system yield and field reliability.

DDR SDRAM devices are manufactured using leading-edge processes to meet stringent timing specifications. Controlling the DDR memory subsystem and ensuring that timing margins are adhered to are the duties of the DDR subsystem, part of the host SoC. Rather than design from scratch, most designers opt to use DDR subsystem IP that is already qualified, saving time and money and reducing the risk of yield and field-reliability problems.

DDR IP design is critical. Managing delays and skews is imperative, and DDR IP layout has a direct effect on this. For example, the layout of Uniquify’s DDR PHY is aligned directly to the host SoC’s pad frame to eliminate any skew effects due to the physical layout. While this was not as important in the early versions of DDR, it is crucial to DDR3 and DDR4 with their much narrower timing windows.

ECD: System yield and reliability have been considered fundamental DDR problems. How is the more stringent timing of DDR4 going to affect these factors? What can designers do to mitigate these issues?

LEE: DDR memory chips need to be fast and small to keep costs down, which means that the onboard timing interface must be as simple as possible. Memory suppliers offer timing specs and some special-purpose registers to assist in calibrating the timing. However, the main job of controlling the timing interface is left up to the DDR memory control subsystem that is part of the host SoC.

Deep submicron SoC designs integrate DDR memory subsystems that operate at multi-GHz clock rates, resulting in read-write timing margins measured in picoseconds. Designers need to be concerned about both static variations due to process and dynamic variations due to system operating conditions such as temperature and voltage.

Designing the DDR memory subsystem to accommodate static and dynamic variations in system-level timing parameters during read and write cycles is challenging. No two systems behave exactly alike, which means that the engineering team needs to calibrate the DDR timing interface to accommodate differences in timing behavior between the host SoC, the board and package interface, and the DDR memory.

Satisfying these critical timing requirements can demand exhaustive rounds of incremental system-level parameter tuning. This is accomplished by manually working through a number of samples and picking a set of calibration points broad enough to cover the expected timing variations. This manual process can take several weeks, often affecting the project schedule. Even then, the resulting silicon often fails to deliver optimal system yield in volume production or exhibits reliability problems in the field.

Uniquify took a different view of the problem and developed technology for the DDR memory controller IP that automatically and precisely measures in-system DDR subsystem timing and makes on-the-fly adjustments to keep the timing interface operating optimally.

ECD: How does this variation affect an SoC’s DDR interface? What components of variation do designers need to contend with?

LEE: Designers need to be concerned with static variations caused by process and dynamic variations due to system operating conditions. For example, consider the SoC die itself. Because of imperfections in the manufacturing process, no two SoCs will be exactly alike. Similarly, no two DDR SDRAMs will be exactly alike, nor will two PCBs. Moreover, all of these components are susceptible to variations caused by shifting operating conditions including temperature and voltage.

Now consider that each system has a collection of components with no way to predict in advance where each component lies on the distribution curve. As a result, designers are forced to design with a one-size-fits-all approach to accommodate the expected worst-case distributions of all components in the DDR memory subsystem.
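The cost of that one-size-fits-all approach can be sketched as a simple margin budget: each component's worst-case uncertainty is subtracted from the unit interval, and whatever remains is the usable sampling window. The numbers below are purely illustrative, not from any specification:

```python
# Hypothetical worst-case margin budget for a DDR4-3200 interface.
# Each entry is an assumed worst-case uncertainty, in picoseconds.
ui_ps = 312.5  # DDR4-3200 unit interval
worst_case_uncertainty_ps = {
    "SoC + package skew": 90,
    "PCB trace mismatch": 60,
    "DRAM output variation": 80,
    "clock and strobe jitter": 50,
}
window_ps = ui_ps - sum(worst_case_uncertainty_ps.values())
print(f"remaining sampling window: {window_ps:.1f} ps")
```

With these assumed figures, only a few tens of picoseconds remain to sample the data correctly, even though any individual system is unlikely to sit at the worst-case corner for every component at once. That gap between the worst-case budget and a given system's actual behavior is what per-system calibration recovers.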

ECD: How can designers solve static and dynamic variation problems during system operation in DDR memory subsystems?

LEE: Uniquify’s DDR IP incorporates Self-Calibrating Logic (SCL) and Dynamic Self-Calibrating Logic (DSCL) for real-time calibration to accommodate both static and dynamic variations in the system operating environment. These innovative technologies precisely measure DDR memory timing within each system and use that information to adjust memory timing on-the-fly.

SCL is applied at system power-on and is used to compensate for static variations. DSCL runs during system operation and is used to mitigate dynamic variations caused by changes in the operating environment, such as fluctuating temperature and voltage. SCL and DSCL help enhance device and system yield and reliability, reducing the effects of variation and maintaining DDR memory system performance as operating conditions fluctuate during system operation.
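The general idea behind a power-on timing calibration can be sketched as a delay-line sweep: step the strobe delay across its range, record which settings read a known test pattern back correctly, and center the strobe in the passing window. This is a generic, hypothetical illustration of that class of technique, not Uniquify's actual SCL implementation:

```python
# Generic sketch of a power-on delay-line calibration sweep.
def calibrate(read_at_tap, num_taps):
    """read_at_tap(tap) -> True if a test pattern reads back correctly
    at that delay setting; returns the tap centered in the passing window."""
    passing = [tap for tap in range(num_taps) if read_at_tap(tap)]
    if not passing:
        raise RuntimeError("no working delay setting found")
    # Centering maximizes margin against drift in either direction.
    return passing[len(passing) // 2]

# Example: suppose taps 10 through 21 (of 32) pass; calibration picks
# a tap in the middle of that window.
best = calibrate(lambda t: 10 <= t <= 21, 32)
```

A dynamic scheme in the spirit of DSCL would then rerun a lightweight version of this sweep periodically during operation, so the chosen delay tracks the passing window as temperature and voltage drift.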

Josh Lee is CEO and president of Uniquify.

Uniquify info@uniquify.com www.uniquify.com
