SK Hynix was the first memory vendor to start talking about HBM3 and was the first company to complete development of memory under that spec. Today the company said that it had begun to mass produce HBM3 memory and that these DRAMs will be used by Nvidia for its H100 compute GPUs and DGX H100 systems, which will ship in the third quarter.
SK Hynix's HBM3 known good stack dies (KGSDs) offer peak memory bandwidth of 819 GB/s, which means they support data transfer rates of up to 6400 MT/s per pin. As for capacity, each stack packs eight 2GB DRAM devices for a total of 16GB per package. SK Hynix also has 12-Hi 24GB KGSDs, but since Nvidia appears to be the company's primary customer for HBM3, the company is kicking off production with 8-Hi stacks.
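The quoted figures follow directly from the standard HBM3 parameters: a 1024-bit interface per stack running at 6.4 Gb/s per pin, and eight 16Gb (2GB) dies per 8-Hi stack. A minimal sketch of that arithmetic (the constants are assumptions based on the HBM3 spec, not from SK Hynix's announcement):

```python
# Sketch: how the per-stack HBM3 figures quoted above fit together.
BUS_WIDTH_BITS = 1024    # HBM3 interface width per stack (bits)
PIN_RATE_GBPS = 6.4      # per-pin data rate, Gb/s (= 6400 MT/s)

# Peak bandwidth per stack: bits per second across the bus, divided by 8 for bytes.
bandwidth_gb_s = BUS_WIDTH_BITS * PIN_RATE_GBPS / 8   # -> 819.2 GB/s

DIES_PER_STACK = 8       # 8-Hi stack
DIE_CAPACITY_GB = 2      # 16Gb DRAM die = 2GB

capacity_gb = DIES_PER_STACK * DIE_CAPACITY_GB        # -> 16GB per KGSD

print(f"{bandwidth_gb_s:.1f} GB/s, {capacity_gb} GB")  # 819.2 GB/s, 16 GB
```

The same arithmetic with 12 dies instead of 8 yields the 24GB figure for the 12-Hi KGSDs mentioned above.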
The start of HBM3 mass production is good news for SK Hynix's bottom line; for a while, at least, the company will be the only supplier of this memory type and will be able to charge a hefty premium for these devices. What's important for SK Hynix's public image is that it is beginning mass production of HBM3 ahead of its arch-rival Samsung.
Eventually, SK Hynix and other memory makers will offer HBM3 packages with up to 16 32Gb DRAM devices and capacities of 64GB per KGSD, but that is a longer-term prospect.
Nvidia's H100 compute GPU is equipped with 96GB of HBM3 DRAM, though because of ECC support and some other factors, users can access 80GB of ECC-enabled HBM3 memory connected using a 5120-bit interface. To win the contract with Nvidia, SK Hynix worked closely with the company to ensure perfect interoperability between the processor and the memory devices.
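The H100 figures are consistent with five active 16GB stacks out of six on the package. A short sanity check of that reading (the per-stack constants are assumptions carried over from the stack specs above, not stated by Nvidia):

```python
# Sketch: reconciling the 96GB / 80GB / 5120-bit H100 figures
# with 8-Hi HBM3 stacks as described above.
STACK_CAPACITY_GB = 16   # one 8-Hi KGSD (8 x 2GB dies)
STACK_BUS_BITS = 1024    # interface width of one HBM3 stack

total_stacks = 96 // STACK_CAPACITY_GB        # 96GB on package -> 6 stacks
active_stacks = 5120 // STACK_BUS_BITS        # 5120-bit interface -> 5 stacks
usable_gb = active_stacks * STACK_CAPACITY_GB # 5 x 16GB -> 80GB accessible

print(total_stacks, active_stacks, usable_gb)  # 6 5 80
```

In other words, the 5120-bit interface corresponds to five of the six on-package stacks being exposed to the user, which matches the 80GB accessible capacity.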
“We aim to become a solution provider that deeply understands and addresses our customers’ needs through continuous open collaboration,” said Kevin (Jongwon) Noh, president and chief marketing officer at SK Hynix.
But Nvidia will not be the only company to use HBM3 in the foreseeable future. SiFive taped out its first HBM3-supporting system-on-chip on TSMC's N5 node about a year ago, so the company can offer similar technology to its clients. Furthermore, Rambus and Synopsys have both offered silicon-proven HBM3 controllers and physical interfaces for quite some time and have landed numerous customers, so expect various HBM3-supporting SoCs (primarily for AI and supercomputing applications) to arrive in the coming quarters.