According to a TechInsights report, HBM is a 3D-stacked DRAM device that delivers high bandwidth over wide channels, making it well suited to applications that demand high performance, energy efficiency, large capacity, and low latency, including High-Performance Computing (HPC), high-performance GPUs, artificial intelligence, and data centers. TechInsights predicts that upcoming HBM4 devices (2025-2026) and HBM4E devices (2027-2028) will feature capacities of 48 GB to 64 GB, with 16-high stacks and bandwidths of 1.6 TB/s or higher.
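As a back-of-the-envelope check, the projected stack capacities imply a per-die DRAM density that can be derived from the figures above. The per-die numbers below are computed here, not stated in the report; the sketch assumes all dies in a stack have equal capacity.

```python
# Infer per-die DRAM density from projected stack capacity and stack height.
# Assumption (not from the report): every die in the stack is the same size.
def die_density_gbit(stack_capacity_gbyte: int, stack_height: int) -> float:
    """Per-die density in gigabits for an evenly divided stack."""
    return stack_capacity_gbyte * 8 / stack_height

# A 48 GB, 16-high stack implies 24 Gb dies; 64 GB implies 32 Gb dies.
print(die_density_gbit(48, 16))  # 24.0
print(die_density_gbit(64, 16))  # 32.0
```

In other words, the 48-64 GB projections for 16-high HBM4/HBM4E stacks correspond to 24-32 Gb DRAM dies.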
HBM per-pin data rates have risen rapidly across generations, from around 1 Gbps in HBM Gen1 and 2 Gbps in HBM Gen2 to 3.6 Gbps in HBM2E, 6.4 Gbps in HBM3, and 9.6 Gbps in HBM3E. For Gen1 and Gen2 HBM devices, SK Hynix stacked HBM DRAM chips using the TC-NCF (thermo-compression bonding with non-conductive film) method. For Gen3 and Gen4, it transitioned to the MR-MUF (mass reflow molded underfill) process. SK Hynix has continued to refine these technologies and is now developing an advanced MR-MUF process for Gen5 to improve thermal management. TechInsights anticipates that the upcoming Gen6 HBM4 devices may combine this process with emerging hybrid bonding techniques.
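The per-pin rates above translate into per-stack bandwidth once the interface width is known. The sketch below assumes the standard 1024-bit stack interface used from HBM1 through HBM3E, which is JEDEC background knowledge rather than something stated in the article.

```python
# Back-of-the-envelope: peak per-stack bandwidth from per-pin data rate.
# Assumption (not from the article): a 1024-bit interface per stack,
# as specified for HBM1 through HBM3E.
PER_PIN_RATES_GBPS = {
    "HBM1":  1.0,
    "HBM2":  2.0,
    "HBM2E": 3.6,
    "HBM3":  6.4,
    "HBM3E": 9.6,
}

def stack_bandwidth_gb_per_s(pin_rate_gbps: float, width_bits: int = 1024) -> float:
    """Peak bandwidth of one stack in GB/s: width (bits) x rate (Gb/s) / 8."""
    return width_bits * pin_rate_gbps / 8

for gen, rate in PER_PIN_RATES_GBPS.items():
    print(f"{gen}: {stack_bandwidth_gb_per_s(rate):.1f} GB/s")
```

At a 1024-bit interface, HBM3E's 9.6 Gbps per pin works out to roughly 1.2 TB/s per stack; reaching the 1.6 TB/s or more projected for HBM4 would require higher pin rates, a wider interface, or both.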
HBM devices currently rely on the TC-NCF and MR-MUF processes to manage thermal dissipation within the stack. The TC-NCF method applies a thin non-conductive film after each chip is stacked, whereas the MR-MUF method interconnects all vertically stacked chips in a single heating and bonding step. For higher-stack HBM solutions, such as HBM4E, HBM5, and beyond, TechInsights suggests that new approaches, such as hybrid bonding, may be required to manage heat effectively.