SK hynix HBM4: Next-Gen AI Memory Unveiled

By Priyanka Patel, Tech Editor

SK hynix Unveils Next-Generation Memory Solutions to Power the AI and HPC Revolution

SK hynix is spearheading advancements in memory technology, showcasing a comprehensive portfolio of solutions designed to accelerate the rapidly converging worlds of artificial intelligence (AI) and high-performance computing (HPC) at Supercomputing 2025 (SC25), held in St. Louis from November 16–21. The company’s presence at SC25, the world’s largest HPC conference, held annually since 1988, underscored its commitment to becoming a leading “full-stack AI memory creator.”

At the event, themed “Memory, Powering AI and Tomorrow,” SK hynix presented its innovative memory lineup and a forward-looking technological vision aimed at dramatically improving data analysis in modern computing systems. Experts from industry, academia, and research institutions gathered at SC25 to explore the latest trends and foster collaboration, with the synergy between AI and HPC taking center stage.

Pioneering Products for AI and HPC Workloads

SK hynix highlighted its core product offerings – High Bandwidth Memory (HBM), DRAM, and enterprise Solid-State Drives (eSSDs) – demonstrating their capabilities within real-world AI and HPC environments. These demonstrations showcased the company’s technological prowess and commitment to delivering cutting-edge solutions.

HBM, a high-value, high-performance memory, vertically interconnects multiple DRAM chips to significantly increase capacity and data processing speeds. SK hynix has now produced six generations of HBM, from the original HBM to the forthcoming HBM4. The company unveiled the 12-layer HBM4, developed as an industry first in September 2025, featuring 2,048 input/output (I/O) channels – double the previous generation – and a greater than 40% improvement in power efficiency. According to a company release, HBM4 is ideally suited for ultra-high-performance AI computing systems.
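The effect of doubling the I/O width can be put in rough numbers. The per-pin data rates below (9.6 Gb/s for the prior generation, 8 Gb/s for HBM4) and the 1,024-pin width of the prior generation are illustrative assumptions, not figures from the article:

```python
# Rough per-stack bandwidth estimate for an HBM-style interface.
# Per-pin rates and the prior generation's 1,024-pin width are
# illustrative assumptions, not SK hynix specifications.

def stack_bandwidth_gbps(io_pins: int, gbps_per_pin: float) -> float:
    """Peak bandwidth in GB/s = pin count * per-pin rate (Gb/s) / 8 bits."""
    return io_pins * gbps_per_pin / 8

hbm3e = stack_bandwidth_gbps(1024, 9.6)   # prior generation (assumed)
hbm4 = stack_bandwidth_gbps(2048, 8.0)    # doubled I/O width (assumed rate)

print(f"HBM3E-class: ~{hbm3e:.0f} GB/s per stack")
print(f"HBM4-class:  ~{hbm4:.0f} GB/s per stack")
```

Even at a lower per-pin rate, the doubled interface width pushes per-stack bandwidth well past the prior generation, which is why width, not just clock speed, drives each HBM generation.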

Alongside HBM4, SK hynix showcased the 12-layer HBM3E, currently the highest-performing commercialized HBM available, integrated with NVIDIA’s next-generation GB300 Grace™ Blackwell GPU.

Advancing DRAM Performance with DDR5

The DRAM section featured next-generation server modules based on DDR5 technology. These included RDIMM and MRDIMM products leveraging the 1c node – the sixth generation of the 10nm process technology – as well as 256 GB 3DS DDR5 RDIMM and 256 GB DDR5 Tall MRDIMM. These solutions are engineered to deliver faster speeds and improved power efficiency, ensuring stable operation in demanding server and data center environments.

Registered Dual In-line Memory Module (RDIMM) is a server memory module with multiple DRAM chips, while Multiplexed Rank Dual In-line Memory Module (MRDIMM) enhances speed by operating two ranks simultaneously. The 1c node represents the latest advancement in 10nm-class process technology, building upon previous generations including 1x, 1y, 1z, 1a, and 1b. 3D Stacked Memory (3DS) further enhances performance by vertically interconnecting multiple DRAM chips using through-silicon via (TSV) technology.
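The benefit of multiplexing two ranks can be sketched with back-of-the-envelope arithmetic. The DDR5-6400 per-rank speed and 64-bit data bus width used here are illustrative assumptions, not SK hynix specifications:

```python
# Illustrative MRDIMM math: multiplexing two ranks lets the module
# interface transfer data at twice the per-rank DRAM rate.
# Figures are assumptions for illustration, not product specs.

def module_bandwidth_gbs(data_rate_mts: int, bus_width_bits: int = 64) -> float:
    """Peak module bandwidth in GB/s for a given transfer rate in MT/s."""
    return data_rate_mts * bus_width_bits / 8 / 1000

per_rank_mts = 6400              # hypothetical DDR5 per-rank speed
mrdimm_mts = per_rank_mts * 2    # two ranks multiplexed onto the bus

print(f"RDIMM-class:  {module_bandwidth_gbs(per_rank_mts):.1f} GB/s")
print(f"MRDIMM-class: {module_bandwidth_gbs(mrdimm_mts):.1f} GB/s")
```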

High-Capacity eSSDs for Enterprise Storage

SK hynix also presented a range of high-capacity and high-performance eSSDs, including the PS1010 E3.S and PE9010 M.2, both based on 176-layer 4D NAND, alongside the PEB110 E1.S built on 238-layer NAND. The portfolio also included the PS1012 U.2, utilizing QLC NAND, and the 245 TB PS1101 E3.L, built on the industry’s highest layer count, 321-layer QLC NAND.

These eSSDs support rapid data processing through PCIe 4.0 and 5.0 interfaces. The company also showcased the SE5110, which uses the SATA3 interface for entry-level servers and PCs. Quad-level cell (QLC) NAND flash stores four data bits per cell, offering higher density than single-level cell (SLC), multi-level cell (MLC), and triple-level cell (TLC) designs.
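The density trade-off across cell types follows directly from the bit count: each additional bit per cell doubles the number of distinct charge levels the cell must hold, which raises density but tightens the margin between levels. A minimal sketch:

```python
# Bits per cell vs. the number of distinct charge levels the cell
# must reliably distinguish (levels = 2 ** bits).
cell_types = {"SLC": 1, "MLC": 2, "TLC": 3, "QLC": 4}

for name, bits in cell_types.items():
    levels = 2 ** bits  # doubling levels per extra bit
    print(f"{name}: {bits} bit(s)/cell -> {levels} charge levels")
```

QLC's 16 levels are why it achieves the highest density of the four, at the cost of tighter sensing margins.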

Demonstrating Future Technologies with CXL and OASIS

Beyond product showcases, SK hynix demonstrated next-generation solutions poised to shape future technologies. A heterogeneous memory system, composed of CXL Memory Module-DDR5 (CMM-DDR5) and MRDIMM, was demonstrated in collaboration with Montage Technology, highlighting improved system performance and memory capacity scalability. Compute Express Link (CXL) efficiently connects memory, processors, and other components, expanding bandwidth and capacity.

Another demonstration featured the CXL Memory Module Accelerator (CMM-Ax), which integrates compute capabilities into memory, showcasing its potential with Faiss, Meta’s vector search library. The successful implementation of CMM-Ax within SK Telecom’s Petasus AI Cloud further validated its promise for future AI infrastructure. Facebook AI Similarity Search (Faiss) is a library for efficient similarity search over dense vector embeddings, allowing queries to be matched by semantic closeness rather than exact keywords.
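The core workload Faiss accelerates is nearest-neighbor search over vectors. The pure-Python brute-force sketch below only illustrates that idea; it is not the Faiss API, which provides optimized index structures for the same operation:

```python
# Minimal brute-force vector similarity search, illustrating the
# workload that vector search engines like Faiss accelerate.
# This is a conceptual sketch, not the Faiss API.
import math

def l2(a, b):
    """Euclidean distance between two equal-length vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def search(database, query, k=2):
    """Return indices of the k database vectors closest to the query."""
    ranked = sorted(range(len(database)), key=lambda i: l2(database[i], query))
    return ranked[:k]

db = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0]]
print(search(db, [0.9, 0.1]))  # → [1, 0]
```

Because every query scans the whole database, this step is memory-bandwidth bound, which is why placing compute next to memory (as CMM-Ax does) is attractive for it.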

SK hynix also demonstrated a memory-centric AI machine based on CXL Pooled Memory, connecting multiple servers and GPUs without a network to support distributed inference tasks for large language models (LLMs). CXL Pooled Memory allows multiple hosts to share memory capacity and data, improving efficiency.

Furthermore, the company showcased OASIS (Object-based Analytics Storage for Intelligent SQL Query Offloading), a next-generation storage system based on a data-aware computational storage drive (CSD). Applied to HPC applications developed by Los Alamos National Laboratory, OASIS significantly improved data analysis performance. SK hynix also presented its Optimizer Offloading SSD, which maximizes GPU efficiency by performing optimizer computations directly within storage during AI training.
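The general principle behind SQL query offloading to computational storage can be sketched as filter pushdown: the drive evaluates the WHERE predicate locally so only matching rows cross the I/O channel to the host. The function and row names below are hypothetical illustrations, not the OASIS interface:

```python
# Conceptual filter-pushdown sketch for computational storage.
# Names are hypothetical; this is not the OASIS API.

def drive_side_scan(rows, predicate):
    """Runs on the drive: filter rows before transfer to the host."""
    return [row for row in rows if predicate(row)]

stored_rows = [{"temp": t} for t in (12, 95, 40, 99, 7)]

# Host issues "SELECT * WHERE temp > 90"; the drive filters locally.
hot = drive_side_scan(stored_rows, lambda r: r["temp"] > 90)

reduction = 1 - len(hot) / len(stored_rows)
print(f"{len(hot)} of {len(stored_rows)} rows transferred "
      f"({reduction:.0%} less I/O)")
```

The win grows with predicate selectivity: the fewer rows that match, the more channel bandwidth and host CPU time the offload saves, which is the inefficiency Yang describes below.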

A Vision for Data Analysis Efficiency

During a presentation at SC25, Soonyeal Yang, a technical leader in SK hynix’s Solution SW organization, outlined the company’s strategy for improving data analysis efficiency and advancing storage innovation in HPC environments. “Inefficiencies in the I/O channels during HPC-based data analysis cause overall system performance degradation and increased costs,” Yang stated. He emphasized that OASIS will distribute computational loads effectively, contributing to significant system optimization.

At SC25, SK hynix demonstrated not only future-oriented solutions but also its overarching vision for the evolving AI infrastructure landscape. The company remains committed to proactive innovation and close collaboration with global customers, aiming to become a leading provider of full-stack AI memory solutions and drive advancements in the AI and HPC sectors.
