
AMD’s HBM (High Bandwidth Memory) technology is a stacked-DRAM approach that promises a major boost in bandwidth, a sharp reduction in circuit-board real estate, and significantly lower power consumption. HBM is a new type of memory chip with low power draw, ultra-wide communication lanes, and a revolutionary stacked configuration; its vertical stacking and fast information transfer open the door to exciting performance in innovative form factors.


High Bandwidth Memory

HBM is a new type of CPU/GPU memory (“RAM”) that vertically stacks memory dies, like floors in a skyscraper, shortening the distance data has to travel. Those towers connect to the CPU or GPU through an ultra-fast silicon interconnect called the “interposer.” Several HBM stacks are plugged into the interposer alongside a CPU or GPU, and that assembled module connects to the circuit board.

Though these HBM stacks are not physically integrated with the CPU or GPU, they are so closely and quickly connected via the interposer that HBM’s characteristics are nearly indistinguishable from on-chip integrated RAM.

Why you need HBM

Beyond performance and power efficiency, HBM is also revolutionary in its ability to save space on a product. As gamers increasingly expect smaller and more powerful PCs, replacing bulky GDDR5 chips with HBM can enable devices with exciting new form factors that pack a punch in a smaller size. Compared to GDDR5, HBM can fit the same amount of memory in 94% less space.
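As a back-of-the-envelope check on that 94% figure, the sketch below uses package dimensions from AMD's HBM press materials (roughly 28 mm × 24 mm for 1 GB of GDDR5 versus 7 mm × 5 mm for 1 GB of HBM); those dimensions are an assumption here, not stated in the article itself:

```python
# Footprint for 1 GB of memory, per AMD's announcement slides (assumed figures).
gddr5_mm2 = 28 * 24  # 1 GB of GDDR5 packages: ~672 mm^2 of board area
hbm_mm2 = 7 * 5      # 1 GB HBM stack: ~35 mm^2

savings = 1 - hbm_mm2 / gddr5_mm2
print(f"Board area saved: {savings:.1%}")  # ~94.8%, in line with the quoted 94%
```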

  • GDDR5 can’t keep up with GPU performance growth: GDDR5’s rising power consumption may soon be great enough to actively stall the growth of graphics performance.
  • GDDR5 limits form factors: Reaching high bandwidth requires a large number of GDDR5 chips, plus larger voltage-regulation circuitry, and together these set a floor on the size of a high-performance product.
  • On-chip integration isn’t ideal for everything: Technologies like NAND, DRAM, and optics would benefit from on-chip integration, but they aren’t technologically compatible with the logic process.

GDDR5 Vs HBM Side-by-Side:


Each DRAM die in the HBM stack contains a new type of memory designed around HBM’s distinctive physical layout. The memory runs at a lower voltage (1.3V versus 1.5V for GDDR5), a lower clock speed (500MHz versus 1750MHz), and a slower per-pin transfer rate (1 Gbps versus 7 Gbps for GDDR5), but it makes up for those attributes with an exceptionally wide interface. In this first implementation, each DRAM die in the stack talks to the outside world through two 128-bit-wide channels, so a four-die stack has an aggregate interface width of 1024 bits (versus 32 bits for a GDDR5 chip). At 1 Gbps per pin, that works out to 128 GB/s of bandwidth for each memory stack.
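The bandwidth arithmetic above can be sketched in a few lines; the helper function here is illustrative, using only the interface widths and per-pin rates quoted in the article:

```python
def stack_bandwidth_gbs(interface_bits: int, gbps_per_pin: float) -> float:
    """Aggregate bandwidth in GB/s: total pins x per-pin rate, bits -> bytes."""
    return interface_bits * gbps_per_pin / 8

hbm = stack_bandwidth_gbs(1024, 1.0)  # one HBM stack: 1024 pins at 1 Gbps
gddr5 = stack_bandwidth_gbs(32, 7.0)  # one GDDR5 chip: 32 pins at 7 Gbps

print(hbm)    # 128.0 GB/s per HBM stack
print(gddr5)  # 28.0 GB/s per GDDR5 chip
```

Despite the much slower per-pin rate, the sheer width of the interface is what lets a single HBM stack outrun several GDDR5 chips.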

Wrap up:

AMD expects HBM to spread not only across the rest of its product line over time, but also to other manufacturers, and that’s a good thing: until now, other brands have said it was impossible to pull off. As Joe Macri, Product CTO for AMD, puts it, “People say there’s a wall, but we’re engineers, we build the wall, we can build a new wall, or climb over this one.”

Source: AMD

Sellami Abdelkader, Freelance Writer

Computer engineering student at the Institute of Electrical and Electronics Engineering in Algeria. Passionate about web design, technology, and electronic gadgets.