HBM3 memory will be the main driving force in next-gen GPUs


According to TrendForce, next-generation HBM3 and HBM3e memory will dominate the AI GPU sector, driven by a surge in demand from companies looking to integrate the DRAM into their accelerators. The current NVIDIA A100 and H100 AI GPUs are powered by HBM2e and HBM3 memory, which launched in 2018 and 2020, respectively. Several manufacturers, including Micron, SK Hynix, and Samsung, are rapidly setting up mass production of the newer, faster HBM3 memory, and it won't be long before it becomes the new standard.

A detail about the next-gen memory that often goes unnoticed: HBM3 will be released in a variety of flavours, according to TrendForce. The lower-end HBM3 is expected to run at 5.6 to 6.4 Gbps per pin, with higher variants exceeding 8 Gbps.
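To put those per-pin rates in perspective, here is a rough sketch of the peak bandwidth a single stack would deliver at each speed grade. This assumes the standard 1024-bit interface per HBM stack (an assumption based on prior HBM generations, not a figure from the report):

```python
# Rough peak-bandwidth arithmetic for a single HBM stack.
# Assumption: 1024-bit interface per stack, as in prior HBM generations.

BUS_WIDTH_BITS = 1024

def stack_bandwidth_gbps(pin_rate_gbps: float) -> float:
    """Peak bandwidth per stack in GB/s for a given per-pin data rate."""
    return pin_rate_gbps * BUS_WIDTH_BITS / 8  # bits -> bytes

for rate in (5.6, 6.4, 8.0):
    print(f"{rate} Gbps/pin -> {stack_bandwidth_gbps(rate):.0f} GB/s per stack")
```

So the jump from 6.4 Gbps to 8+ Gbps pins would push a single stack from roughly 819 GB/s to over 1 TB/s of peak bandwidth.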

[Image: HBM3 | credit: wccftech]

Future AI GPUs, such as AMD's Instinct MI300 and NVIDIA's H100, are slated to use next-generation HBM3, where SK Hynix holds an advantage: it has already entered the manufacturing stage and has received a sample request from NVIDIA.

Micron has also just disclosed plans for its future HBM4 memory design, but that isn't due until 2026, so NVIDIA's planned Blackwell GPUs, dubbed "GB100", will most likely use the faster HBM3 variants when they arrive between 2024 and 2025. The memory, built on 5th-generation (10nm-class) process technology, is projected to enter mass production in the first half of 2024, with both Samsung and SK Hynix ramping up.

Competitors such as Samsung and Micron are stepping on the accelerator, according to multiple sources. Samsung has reportedly proposed that NVIDIA handle both wafer and memory procurement through its divisions, while Micron has reportedly partnered with TSMC to supply memory for NVIDIA's AI GPUs.
