According to a DigiTimes report, Nvidia is interested in assessing SK Hynix’s HBM3E samples. If the information is correct, Nvidia’s next-generation compute GPU for AI and high-performance computing (HPC) workloads could employ HBM3E memory instead of HBM3.
Nvidia has requested samples of HBM3E from SK Hynix to evaluate the memory’s impact on GPU performance, according to industry insiders cited by Korea’s Money Today and Seoul Economic Daily.
The data transfer rate of SK Hynix’s forthcoming HBM3E memory will increase from 6.40 GT/s to 8.0 GT/s, raising per-stack bandwidth from 819.2 GB/s to a stunning 1 TB/s. However, questions remain about HBM3E’s compatibility with existing HBM3 controllers and interfaces, as SK Hynix has not yet released details on this aspect of the new technology. In any event, Nvidia and other developers of AI and HPC compute GPUs will need to evaluate the new memory.
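The per-stack bandwidth figures follow directly from the data rate and the stack’s interface width. A quick sketch of the arithmetic, assuming the standard 1024-bit-wide HBM stack interface (the interface width is an assumption here, not stated in the article):

```python
def hbm_stack_bandwidth_gbps(data_rate_gts: float, bus_width_bits: int = 1024) -> float:
    """Per-stack bandwidth in GB/s: data rate (GT/s) * bus width (bits) / 8 bits per byte.

    Assumes the conventional 1024-bit HBM stack interface.
    """
    return data_rate_gts * bus_width_bits / 8

print(hbm_stack_bandwidth_gbps(6.4))  # HBM3:  819.2 GB/s
print(hbm_stack_bandwidth_gbps(8.0))  # HBM3E: 1024.0 GB/s, i.e. roughly 1 TB/s
```

The jump from 819.2 GB/s to about 1 TB/s per stack is thus entirely accounted for by the 6.40 → 8.0 GT/s data-rate increase, with the interface width unchanged.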
SK Hynix appears to plan to begin sampling its HBM3E memory in the latter part of 2023, with large-scale production starting in late 2023 or 2024. The company intends to manufacture HBM3E on its 1b-nm fabrication process, its 5th-generation 10nm-class DRAM node. The same node is already used to produce DDR5-6400 DRAM and will also be used for LPDDR5T memory chips aimed at high-performance, low-power applications.
It is unclear which of Nvidia’s compute GPUs will employ HBM3E memory, but the company is expected to adopt the new memory in its next generation of processors, due in 2024. Meanwhile, it remains to be seen whether this will be a redesigned Hopper GH100 compute GPU or something else entirely.