Last month, SK Hynix confirmed that it is developing new 24GB HBM3 memory with bandwidth of up to 819 GB/s per stack. That announcement introduced the company's next-gen high-bandwidth memory, and with next-gen CPUs and GPUs demanding ever faster memory, HBM3 may well be the standard that supports them.
Now, during the OCP Summit 2021, SK Hynix has officially shared details of its next-gen memory modules. JEDEC, the body responsible for the HBM3 standard, has yet to publish the final specification, but SK Hynix has released figures from its initial tests.
These tests show the memory reaching data rates of 5.2 Gbps to 6.4 Gbps per pin, though it is still unclear which of the two speeds will be closest to what ships in volume for next-gen accelerators.
In the tests, the 5.2 to 6.4 Gbps modules used stacks of 12 DRAM dies, with each stack connected over a 1024-bit interface. We can expect these new stacks to push per-stack bandwidth from the roughly 461 GB/s of current HBM2e up to 819 GB/s.
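As a rough sanity check (back-of-the-envelope arithmetic, not SK Hynix's own math), per-stack bandwidth follows directly from the per-pin data rate and the 1024-bit interface width:

```python
# Per-stack bandwidth sketch: data rate (Gbps per pin) x bus width (bits) / 8 bits per byte.
# The 6.4 Gbps figure comes from SK Hynix's initial tests; the 3.6 Gbps HBM2e pin rate is an
# assumed typical value included only to show where the HBM2e ceiling in the table below comes from.

def stack_bandwidth_gbps(pin_rate_gbps: float, bus_width_bits: int = 1024) -> float:
    """Peak per-stack bandwidth in GB/s."""
    return pin_rate_gbps * bus_width_bits / 8

print(stack_bandwidth_gbps(6.4))  # 819.2 GB/s -- the HBM3 figure in the comparison table
print(stack_bandwidth_gbps(3.6))  # 460.8 GB/s -- the HBM2e figure, for comparison
```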
HBM Memory Specifications Comparison
| DRAM | HBM1 | HBM2 | HBM2e | HBM3 |
| --- | --- | --- | --- | --- |
| I/O (Bus Interface) | 1024 | 1024 | 1024 | 1024 |
| Prefetch (I/O) | 2 | 2 | 2 | 2 |
| Maximum Bandwidth | 128 GB/s | 256 GB/s | 460.8 GB/s | 819.2 GB/s |
| DRAM ICs Per Stack | 4 | 8 | 8 | 12 |
| Maximum Capacity | 4 GB | 8 GB | 16 GB | 24 GB |
| tRC | 48 ns | 45 ns | 45 ns | TBA |
| tCCD | 2 ns (=1 tCK) | 2 ns (=1 tCK) | 2 ns (=1 tCK) | TBA |
| VPP | External VPP | External VPP | External VPP | External VPP |
| VDD | 1.2 V | 1.2 V | 1.2 V | TBA |
| Command Input | Dual Command | Dual Command | Dual Command | Dual Command |
In other news, AMD officially introduced its new Instinct MI250X accelerator on Monday. The card carries a whopping 8 HBM2e stacks clocked at up to 3.2 Gbps, with each stack offering 16 GB for a total of 128 GB of memory.
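For a sense of scale, the same arithmetic applied to that configuration looks roughly like this (a hedged sketch based on the figures above, not AMD's official spec sheet):

```python
# Aggregate numbers for an accelerator with 8 HBM2e stacks at 3.2 Gbps per pin,
# 16 GB per stack, each over a 1024-bit interface -- a back-of-the-envelope estimate.

stacks = 8
capacity_per_stack_gb = 16
pin_rate_gbps = 3.2
bus_width_bits = 1024

total_capacity_gb = stacks * capacity_per_stack_gb                       # 128 GB
aggregate_bandwidth_gbps = stacks * pin_rate_gbps * bus_width_bits / 8   # 3276.8 GB/s (~3.3 TB/s)

print(total_capacity_gb, aggregate_bandwidth_gbps)
```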