HBM3 memory will be the main driving force in next-gen GPUs


According to TrendForce, next-generation HBM3 and HBM3e memory will dominate the AI GPU sector, driven by surging demand from companies looking to incorporate high-bandwidth DRAM. The current NVIDIA A100 and H100 AI GPUs are powered by HBM2e and HBM3 memory, which launched in 2018 and 2020, respectively. Several manufacturers, including Micron, SK Hynix, and Samsung, are rapidly ramping mass production of newer, faster HBM3 memory, and it won’t be long before it becomes the new standard.

One often-overlooked detail about the next-gen memory: HBM3 will ship in a range of speed grades, according to TrendForce. Lower-end HBM3 is expected to run at 5.6 to 6.4 Gbps per pin, with higher-end variants exceeding 8 Gbps.
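To put those per-pin speeds in perspective, here is a minimal sketch of the standard bandwidth arithmetic, assuming the JEDEC-standard 1024-bit interface per HBM stack (the function name and 1024-bit default are illustrative, not from the article):

```python
def hbm_stack_bandwidth_gbps(pin_speed_gbps: float, bus_width_bits: int = 1024) -> float:
    """Peak bandwidth of one HBM stack in GB/s.

    pin_speed_gbps: per-pin data rate in Gb/s
    bus_width_bits: interface width per stack (1024 bits for HBM2e/HBM3)
    """
    # Multiply data rate by interface width, then divide by 8 to convert bits to bytes
    return pin_speed_gbps * bus_width_bits / 8

# Low-end HBM3 at 6.4 Gbps per pin:
print(hbm_stack_bandwidth_gbps(6.4))  # 819.2 GB/s per stack
# Higher-clocked variants above 8 Gbps:
print(hbm_stack_bandwidth_gbps(8.0))  # 1024.0 GB/s per stack
```

A GPU with six such stacks at 6.4 Gbps would therefore approach roughly 4.9 TB/s of aggregate memory bandwidth.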

HBM3 (image credit: wccftech)

Future AI GPUs, such as AMD’s Instinct MI300 and NVIDIA’s H100, are slated to use the next-generation HBM3 process, where SK Hynix holds the advantage: it has already entered the manufacturing stage and has received a sample request from NVIDIA.

Micron has also just disclosed plans for its future HBM4 memory design, but that isn’t due until 2026, so NVIDIA’s planned Blackwell GPUs, dubbed “GB100,” will most likely use the faster HBM3 variants when they arrive in 2024-2025. The memory, built on 5th-generation (10nm-class) process technology, is projected to enter mass production in the first half of 2024, with both Samsung and SK Hynix ramping up.

Competitors Samsung and Micron are stepping on the accelerator, according to multiple sources. Samsung has reportedly offered to handle both wafer and memory procurement for NVIDIA through its subsidiaries, while Micron has reportedly partnered with TSMC to supply memory for NVIDIA’s AI GPUs.


Source

