The bottleneck for running large AI models has never been processing power alone. It has been memory: how fast data can move between where it is stored and the processor doing the work. South Korea's SK hynix just started mass-producing a memory module designed to attack that problem directly.

The product is a 192GB SOCAMM2 module built on SK hynix's sixth-generation 10nm-class LPDDR5X DRAM. The target platform: Nvidia's Vera Rubin, the successor to the Blackwell architecture that currently powers most of the world's AI data centers.

The specs

The SOCAMM2 form factor was built specifically for AI accelerators, not adapted from existing server memory. SK hynix says it delivers more than double the bandwidth of conventional RDIMM modules and over 75% better power efficiency. For data center operators paying enormous electricity bills to run AI workloads, that efficiency gain translates directly into lower cost per inference.
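To see how a power-efficiency gain feeds into cost, here is a back-of-envelope sketch. Every number except the 75% figure is an illustrative assumption, not an SK hynix or Nvidia specification, and "75% better power efficiency" is read here as doing the same work at 1/1.75 of the power:

```python
# Back-of-envelope: memory power savings -> annual electricity cost.
# All figures below are illustrative assumptions, not vendor specs.

MODULE_POWER_W = 15.0     # assumed draw of a conventional memory module
EFFICIENCY_GAIN = 0.75    # "over 75% better power efficiency" (per SK hynix)
MODULES_PER_SERVER = 8    # assumed
SERVERS = 10_000          # assumed fleet size
HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.10      # assumed USD per kWh

# One common reading: 75% better efficiency = same work at 1/1.75 the power.
new_power_w = MODULE_POWER_W / (1 + EFFICIENCY_GAIN)
saved_w_per_server = (MODULE_POWER_W - new_power_w) * MODULES_PER_SERVER
saved_kwh = saved_w_per_server * SERVERS * HOURS_PER_YEAR / 1000

print(f"Estimated annual savings: ${saved_kwh * PRICE_PER_KWH:,.0f}")
```

Under these made-up numbers the savings land in the hundreds of thousands of dollars per year for memory alone, before counting the cooling load that wasted watts also create.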

Kim Joo-sun, president of SK hynix's AI infrastructure division, put it plainly: "192GB SOCAMM2 sets a new standard for AI memory performance."

Why it matters

SK hynix is the world's second-largest memory chipmaker and already Nvidia's primary supplier of HBM (High Bandwidth Memory), the stacked memory packaged alongside every H100 and B200 GPU. That relationship has turned SK hynix into one of the biggest beneficiaries of the AI boom, with its stock price roughly tripling since early 2023.

Moving into SOCAMM2 production for Vera Rubin deepens the dependency in both directions. Nvidia needs SK hynix to deliver enough memory at the right specs. SK hynix needs Nvidia to keep winning the AI accelerator market. For now, both sides of that bet look solid.

Sources: Yonhap News
