
Samsung has announced that it's bringing new, advanced GDDR6 modules to market with higher capacities and clock rates. It's a step forward for GDDR memory as a whole, with higher data transfer rates (up to 16Gbps) and larger capacities (16Gb, or 2GB per die). In other words, a system could now field 8GB of RAM in just four GDDR6 chips while still delivering 256GB/s of memory bandwidth. Obviously smaller dies will be available if companies want to deploy more memory channels with less VRAM per channel, but the figures for the new RAM standard are respectable.
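
Assuming each GDDR6 chip exposes the standard 32-bit interface defined by the spec (the per-chip bus width is our assumption, not a figure from Samsung's announcement), the math behind that bandwidth number works out as follows:

16Gbps per pin x 32 pins per chip x 4 chips ÷ 8 bits per byte = 256GB/s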

The chart below is from Micron, but it illustrates the standard difference between GDDR5X and GDDR6. Power consumption may decrease a bit from being built on newer nodes, and there's a different channel configuration, but overall the bandwidth gain should be evolutionary.

GDDR5X vs. GDDR6

The funny thing is, once upon a time it wasn't clear if GDDR6 would come to market at all, at least as a major GPU solution. HBM was supposed to scale quickly into HBM2, and HBM2 was supposed to deploy in major markets relatively quickly. It wasn't unusual for AMD to be the only company to deploy HBM; AMD and Nvidia had split similarly over the use of GDDR4 a decade ago, with AMD adopting the technology and Nvidia choosing to stick with GDDR3. Nvidia's decision to use a stopgap GDDR5X raised a few eyebrows, but HBM2 still seemed to be the better long-term technology, especially when Nvidia deployed it first for its high-end GPUs and AMD was following suit with Vega.

But there have been consistent rumors all year that HBM2's manufacturing difficulties are causing issues for anybody who adopts the tech, not just AMD. A report from Semiengineering suggests some reasons why. While the data bus in HBM2 is 1,024 bits wide, that's only the data transmission lines. Factor power, ground, and other signaling into the mix, and it's more like 1,700 wires to each HBM2 die. The interposer isn't necessarily difficult to manufacture, but the design of the interposer is still challenging: companies have to balance signal length, power consumption, and crosstalk. Managing heat flow in HBM stacks is also difficult; because each die is stacked on top of the other, you can wind up with lower dies becoming extremely hot, since they radiate heat upward through the memory stack.
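
For perspective, HBM2 takes the opposite approach to GDDR6: a very wide bus running at a relatively low per-pin rate. Assuming HBM2's nominal 2Gbps-per-pin transfer rate (a spec figure, not one cited in the report above), a single stack works out to:

1,024 bits x 2Gbps per pin ÷ 8 bits per byte = 256GB/s per stack

That's roughly the same bandwidth as the four-chip GDDR6 configuration described earlier, but it requires the interposer and its ~1,700 wires per die to get there.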

4GB HBM2 - Samsung

Samsung's diagram for its own HBM2 design.

HBM2 may well remain in the upper end of the product stack, but it seems no one has had much luck bringing it down market yet, or even in ensuring easy deployment. That's not because the technology is intrinsically bad (AMD's Vega gets some very real thermal headroom from HBM2), but a technology that can't scale easily down to lower markets is a technology that's fundamentally limited in terms of its own appeal. Our argument is simple: If Nvidia deploys GDDR6 in its next generation of high-end cards and AMD makes a similar move with whatever it uses to follow up the Polaris family (RX 560-RX 580), it'll be a sign both companies are still struggling to bring HBM2 to market.