Highlights
- Samsung begins HBM4 mass production in 2026, targeting AI accelerators and data centers.
- New memory delivers multi-terabyte-per-second bandwidth, 3D stacking, and major speed gains over HBM3E.
- Competition with SK Hynix and Micron intensifies in next-gen AI memory supply.
- Samsung expects strong HBM revenue growth and future HBM4E sampling in late 2026.
HBM4 is a form of 3D-stacked DRAM that delivers far greater bandwidth than conventional memory types such as DDR or GDDR. It offers a 2048-bit interface with a transfer rate more than five times that of standard DDR memory, making it well suited to AI workloads.
HBM4 Commercialization: Performance Scaling and Market Implications in the AI Era
Samsung unveiled its new HBM4 memory chips, which provide a baseline of 8Gbps of bandwidth per pin and up to 2 terabytes per second of total bandwidth across a fully configured stack. Samsung’s current HBM4 chips reach up to 11.7Gbps per pin, with a potential 13Gbps once further optimizations are applied. This represents a speed gain of up to 22 percent over HBM3E and will help eliminate the data bottlenecks associated with high-performance AI and supercomputing systems.
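As a rough sanity check, the headline figures fall directly out of the interface width. Below is a minimal sketch of the arithmetic; the pin rates are those quoted above, while the 9.6Gbps HBM3E per-pin baseline used for the comparison is an assumption, not a figure from Samsung.

```python
# Back-of-the-envelope HBM4 bandwidth math. Pin rates are the figures
# quoted above; the 9.6 Gbps HBM3E per-pin baseline is an assumption.

INTERFACE_WIDTH_BITS = 2048  # HBM4 interface width per stack

def stack_bandwidth_tbps(pin_rate_gbps: float) -> float:
    """Total stack bandwidth in TB/s for a given per-pin transfer rate."""
    total_gbits_per_s = pin_rate_gbps * INTERFACE_WIDTH_BITS
    return total_gbits_per_s / 8 / 1000  # bits -> bytes, then GB -> TB

print(stack_bandwidth_tbps(8.0))   # 2.048  -> the "up to 2 TB/s" baseline
print(stack_bandwidth_tbps(11.7))  # ~2.995 -> Samsung's current parts
print(f"{(11.7 / 9.6 - 1) * 100:.0f}%")  # 22% -> the quoted gain over HBM3E
```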
Samsung’s HBM4 memory uses 12-layer vertical stacking, giving individual stacks capacities of approximately 24GB to 36GB. Future iterations of HBM4 are planned to use 16-layer stacking, raising capacity to as much as 48GB per stack to accommodate ever-growing high-performance computing demands. Beyond raw speed, ongoing development will focus on power efficiency and thermal performance through refined power-delivery and packaging designs, helping to resolve some of the challenges associated with high-density memory stacking.
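The per-stack capacities likewise follow from simple layer-count arithmetic. A minimal sketch is below; the 2GB and 3GB per-die densities are assumptions chosen to reproduce the quoted figures, not disclosed specifications.

```python
# Per-stack capacity is layer count x per-die density. The 2 GB and 3 GB
# die densities are assumptions chosen to reproduce the quoted figures.

def stack_capacity_gb(layers: int, die_gb: int) -> int:
    """Capacity of one HBM stack in GB."""
    return layers * die_gb

print(stack_capacity_gb(12, 2))  # 24 GB -> 12-high stack, 2 GB dies
print(stack_capacity_gb(12, 3))  # 36 GB -> 12-high stack, 3 GB dies
print(stack_capacity_gb(16, 3))  # 48 GB -> planned 16-high stack
```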

Liam Briese/Unsplash
Samsung Initiates Mass Production of HBM4: Advancing High-Bandwidth Memory for AI Systems
Samsung’s announcement of its next-generation HBM4 memory comes as other top-tier manufacturers, SK Hynix and Micron, also vie to supply HBM4 to AI clients; both reached their volume production milestones before Samsung began mass production. All three manufacturers are competing for share of the rapidly expanding memory market driven by large AI models, whose training and inference depend on massive memory bandwidth to process large volumes of data in parallel.
The announcement drew a positive market response, reflected in a rise in Samsung’s stock price and signaling investor confidence in its memory solutions for AI infrastructure.
Going forward, Samsung estimates that its high-bandwidth memory (HBM) revenue will more than triple in 2026 compared to 2025, driven by strong production and demand.
Samsung also plans to sample HBM4E, its next generation, in the second half of 2026, with even greater performance improvements.

Sama Hosseini/Unsplash
Final Thoughts
The successful production of HBM4 chips reflects a broader semiconductor trend: growing investment in high-performance memory systems to meet the demands of generative AI, machine learning, and high-performance computing. It also marks a shift in the global memory supply chain toward high-bandwidth memory (HBM) solutions that match evolving workload needs.
Samsung’s transition from HBM4 development to mass production and shipping represents a critical milestone in next-generation AI hardware. With higher bandwidth, better efficiency, and scaled-up production, HBM4 is well positioned to become a dominant component of leading-edge AI accelerators and data center architectures. The intense competition among major memory manufacturers underscores the strategic value of HBM in the memory market.