LPDDR6 update for non-mobile AI platforms from Jedec | Electronics Weekly

The relevant subcommittee has been working on extending LPDDR6 beyond mobile platforms, for example to support selected data centre and accelerated computing workloads.
LPDDR6 update
More specifically, there are four areas of change.
First, it previews a narrower per-die interface (x6) to enable higher capacities. Jedec explains that the move to a non-binary interface width – from x16 to x24, plus the inclusion of x12 and an additional x6 sub-channel mode – allows more die per package, as well as higher memory capacities per component and per channel, which it describes as a critical enabler for AI-scale memory footprints.
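As a rough illustration of why a narrower per-die interface raises capacity: the channel and die widths below come from the update, but the die density and the packing arithmetic are illustrative assumptions on our part, not spec figures.

```python
# Illustrative sketch only: how many die of a given interface width can
# share one 24-bit LPDDR6 channel, and the resulting capacity per channel.
# The 64Gb die density (the current LPDDR6 maximum) is an assumed example.

CHANNEL_WIDTH = 24        # bits per LPDDR6 channel
DIE_DENSITY_GB = 64 / 8   # 64 Gb per die, expressed in GB

for die_width in (24, 12, 6):                 # per-die interface widths
    die_per_channel = CHANNEL_WIDTH // die_width
    capacity_gb = die_per_channel * DIE_DENSITY_GB
    print(f"x{die_width}: {die_per_channel} die/channel, {capacity_gb:.0f} GB")
```

Under these assumptions, halving or quartering the per-die width lets two or four die share the same channel, doubling or quadrupling the capacity behind it.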
Second, it gives data center customers the option to balance user capacity and metadata needs according to their own specific reliability requirements.
Third, 512GB density is on the horizon. Note that LPDDR6 device densities originally ranged from 4Gb to 64Gb. The goal is to unlock densities beyond the current LPDDR5/5X maximum. Again, this will address the ever-growing memory capacity requirements of AI training and inference workloads.
Finally, there is an LPDDR6 SOCAMM2 module standard in development. Jedec says it is actively working on this LPDDR6-based standard, which is being designed “to carry the compact, serviceable module form factor forward and offer a clear upgrade path from today’s LPDDR5X SOCAMM2 modules.”
LPDDR6
The LPDDR6 update builds on the official LPDDR6 standard announced back in July last year. This standard included features, functionalities, AC and DC characteristics, packages, and ball/signal assignments.
Basically, it aims to define the minimum set of requirements for a Jedec-compliant x24 one-channel SDRAM device.
LPDDR6 PIM
Note that Jedec highlights that it is nearing completion of a standard for LPDDR6 Processing-in-Memory (LPDDR6 PIM) technology.
This is described by the standards body as “a next‑generation memory solution intended to address the rapidly increasing performance and energy‑efficiency requirements of edge and data‑center inference workloads”.
“By integrating processing capability directly within LPDDR6 memory, LPDDR6 PIM reduces data movement between memory and compute, enabling higher inference performance and lower power consumption while maintaining the efficiency advantages of LPDDR‑based designs.”
Image: Samsung – Edge AI