AMD May Fully Embrace HBM on Both CPU and GPU?
Updated: 2021-07-27 11:39:04
Rumors have recently surfaced online that AMD's next-generation Zen 4-based EPYC Genoa processor may be equipped with HBM in order to compete with Intel's next-generation server CPU, Xeon Sapphire Rapids. Coincidentally, a recent Linux kernel patch also revealed that AMD's next-generation CDNA 2-based Instinct MI200 GPU will use HBM2e, with up to 128GB of memory. Taken together, AMD looks likely to fully embrace HBM in the server market.
HBM is no longer relevant to the consumer graphics market
HBM (High Bandwidth Memory) was in fact first conceived by AMD. To realize this vision, AMD brought in SK Hynix, which had experience with 3D stacking processes, and, with the help of interconnect and packaging vendors, jointly developed HBM.
Fiji GPU / AMD
AMD was also the first to bring HBM to the GPU market, applying it to its Fiji GPUs. Then, in 2016, Samsung became the first to mass-produce HBM2, and this time NVIDIA beat AMD to the punch, becoming the first to use the new memory standard in its Tesla P100 accelerator card.
By then, HBM's advantages and disadvantages were obvious: its initial bandwidth lead was being eroded by GDDR6, while its design complexity and cost remained difficult hurdles. Those costs are a small fraction of a high-end graphics card's price, but on low-end cards HBM was painful to justify. Even so, AMD did not give up on HBM, continuing to use HBM2 in its Vega graphics cards.
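To see why GDDR6 could catch up, it helps to compare theoretical peak bandwidth. The sketch below uses representative per-pin data rates (my assumptions, not figures from the article): roughly 3.6 Gb/s per pin on a 1024-bit HBM2e stack versus roughly 16 Gb/s per pin on a 32-bit GDDR6 chip.

```python
# Rough peak-bandwidth comparison between one HBM2e stack and one GDDR6 chip.
# Per-pin rates below are representative assumptions, not exact product specs.

def peak_bandwidth_gbs(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Theoretical peak bandwidth in GB/s: bus width (bits) * per-pin rate (Gb/s) / 8."""
    return bus_width_bits * pin_rate_gbps / 8

hbm2e_stack = peak_bandwidth_gbs(1024, 3.6)  # very wide, slow, low-power interface
gddr6_chip = peak_bandwidth_gbs(32, 16.0)    # narrow, very fast interface

print(f"HBM2e stack: ~{hbm2e_stack:.0f} GB/s")
print(f"GDDR6 chip:  ~{gddr6_chip:.0f} GB/s")
```

One HBM2e stack thus delivers on the order of seven GDDR6 chips' worth of bandwidth, but a GPU board can simply place many GDDR6 chips around the die, which is how GDDR6 narrows the gap at far lower packaging cost.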
However, that may have been the last time we see HBM on a consumer GPU: AMD never used HBM in its subsequent RDNA architecture, and only its CDNA-based accelerator GPUs still rely on it.
Why the server market?
How did HBM take root in the server market? One of the most suitable applications for HBM is power-constrained environments that demand maximum bandwidth, which makes it a perfect fit for artificial intelligence computing in HPC clusters and for large, dense compute data centers.
A100 Memory Comparison by Size / Nvidia
That is why companies with data center businesses continue to use HBM. Nvidia still uses HBM2 and HBM2e in its powerful server GPU, the A100, and may well continue to do so in its next-generation Hopper architecture. Intel's yet-to-be-released Xe-HP and Xe-HPC GPUs are also rumored to use HBM.
However, both manufacturers' consumer GPUs have pointedly avoided HBM in favor of GDDR6 and GDDR6X; presumably neither wants to repeat AMD's detour.
AMD Patents / AMD
As for AMD pioneering HBM on CPUs, the idea is not far-fetched: HBM already appeared in a chip design in a patent AMD published last year. Intel has officially announced that its competing Xeon Sapphire Rapids server CPUs will also use HBM, though mass production will have to wait until 2023. All of this shows how appealing HBM is in the server market, with both vendors starting to bring it to CPUs.
Next Generation HBM
Although JEDEC, which sets the standards, has not yet released the HBM3 specification, SK Hynix, which has been working on the next generation of HBM, revealed the latest HBM3 details in June this year, promising further performance improvements.
HBM2E and HBM3 performance comparison / SK Hynix
SK Hynix's large performance gains are most likely enabled by the patent licensing agreements it signed with Xperi last year. These agreements cover DBI Ultra 2.5D/3D interconnect technology, which can be used in the development of 3DS, HBM2, HBM3, and subsequent DRAM products. While traditional copper-pillar interconnects achieve only 625 interconnects per square millimeter, DBI Ultra achieves 100,000 in the same area.
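The density figures above imply a dramatic jump; a quick calculation (using only the two numbers cited in the article) shows the scale of the improvement:

```python
# Interconnect density figures as cited from SK Hynix / Xperi.
copper_pillar_per_mm2 = 625     # traditional copper-pillar interconnects
dbi_ultra_per_mm2 = 100_000     # DBI Ultra 2.5D/3D hybrid bonding

ratio = dbi_ultra_per_mm2 / copper_pillar_per_mm2
print(f"DBI Ultra offers {ratio:.0f}x the interconnect density per mm^2")
```

A 160x jump in interconnect density is what makes room for the wider, faster die-to-die links that HBM3's bandwidth targets require.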
Since JEDEC announced the HBM2E standard in 2018, HBM has gone nearly three years without an update. Samsung even announced HBM-PIM, with a built-in artificial-intelligence engine, in February this year. As for whether HBM3 can dominate the server field in the future, the share of HBM in the server products planned by the major manufacturers already gives the answer.