According to TrendForce, next-generation HBM3 and HBM3e memory will dominate the AI GPU sector, driven by surging demand from companies looking to incorporate the DRAM into their accelerators. The current NVIDIA A100 and H100 AI GPUs are powered by HBM2e and HBM3 memory, which launched in 2018 and 2020, respectively. Several manufacturers, including Micron, SK Hynix, and Samsung, are swiftly establishing mass-production facilities for newer, faster HBM3 memory, and it won't be long before it becomes the new standard.
A detail that often goes unnoticed about the next-gen memory is that HBM3 will ship in a range of speed grades, according to TrendForce. Lower-end HBM3 is expected to run at 5.6 to 6.4 Gbps per pin, while higher-end variants will exceed 8 Gbps.
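To put those per-pin figures in perspective, here is a back-of-the-envelope sketch of what they mean for per-stack bandwidth. It assumes the standard 1024-bit-wide interface that HBM stacks use; the pin speeds come from the TrendForce figures above, while stack counts per GPU vary by product and are not taken from the report.

```python
# Rough per-stack bandwidth math for the HBM3 speed grades quoted above.
# Assumes the standard 1024-bit interface per HBM stack (an assumption
# about the spec, not a figure from the TrendForce report).

HBM_BUS_WIDTH_BITS = 1024  # bits per HBM stack interface

def stack_bandwidth_gbps(pin_speed_gbps: float) -> float:
    """Per-stack bandwidth in GB/s for a given per-pin data rate in Gbps."""
    return pin_speed_gbps * HBM_BUS_WIDTH_BITS / 8  # convert bits to bytes

for pin_speed in (5.6, 6.4, 8.0):
    print(f"{pin_speed} Gbps/pin -> {stack_bandwidth_gbps(pin_speed):.1f} GB/s per stack")
# 5.6 Gbps/pin -> 716.8 GB/s per stack
# 6.4 Gbps/pin -> 819.2 GB/s per stack
# 8.0 Gbps/pin -> 1024.0 GB/s per stack
```

At 8 Gbps per pin, a single stack crosses the 1 TB/s mark, which is why the higher speed bins matter so much for AI accelerators that pair several stacks with one GPU.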
Future AI GPUs, such as AMD's Instinct MI300 and NVIDIA's H100, are slated to use next-generation HBM3, where SK Hynix holds an advantage: it has already entered mass production and has received sample requests from NVIDIA.
Micron has also just disclosed plans for its future HBM4 memory design, but that isn't due until 2026, so NVIDIA's planned Blackwell GPUs, dubbed "GB100", will most likely use the faster HBM3 variants when they arrive between 2024 and 2025. That faster memory, built on fifth-generation (10nm-class) process technology, is projected to enter mass production in the first half of 2024, with both Samsung and SK Hynix ramping up.
Competitors such as Samsung and Micron are stepping on the gas, according to multiple sources. Samsung has reportedly proposed that NVIDIA handle both wafer and memory procurement through its subsidiaries, while Micron has reportedly partnered with TSMC to supply memory for NVIDIA's AI GPUs.