A set of images of Nvidia’s supposed GP104 die has hit the internet, showing the chip that will likely power the company’s upcoming GeForce GTX 1070 graphics card. The die is estimated to be around 333 square mm in size, and based on a previously leaked photo of the printed circuit board, the card will pair it with conventional GDDR5 memory. The next card in line, the beefier GTX 1080, is expected to use faster GDDR5X memory.
The leaked images show a GP104-200-A1 die marked with a “sample” stamp on its faceplate. It’s reportedly a cut-down version of the GP104-400 die, but similar in overall appearance; the only real difference between the two, according to the image leakers, is a modified GPU configuration. Cards based on the GP104 silicon are expected to offer three DisplayPort connectors, an HDMI port, and a DVI port.
The images arrive after reports surfaced in March that the GeForce GTX 1080 will use GDDR5X memory rather than HBM2 (second-generation High Bandwidth Memory). Rumors suggest Nvidia will announce that card this month, and that it packs the GP104-400 “Pascal” graphics silicon, 8GB of GDDR5X memory on a 256-bit bus, an 8-pin power connector, two DisplayPorts, an HDMI port, and a DVI port.
As a brief explainer, GDDR5X doubles the bandwidth of GDDR5, pushing up to 14Gbps per pin. GDDR5X chips are also half the size of GDDR5 chips, meaning graphics card makers can cram more memory into the same amount of board real estate. It isn’t positioned as a replacement for HBM on the highest-end cards, but rather as a “complementary” product, and the move from GDDR5 to GDDR5X is said to be a smoother transition than the earlier jump from GDDR4 to GDDR5.
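To put those per-pin figures in perspective, here’s a rough back-of-the-envelope sketch of peak memory bandwidth on the rumored 256-bit bus. The 14Gbps GDDR5X rate is the figure quoted above; the 7Gbps GDDR5 rate used for comparison is a typical current speed and an assumption on our part.

```python
# Rough peak-bandwidth math: bus width (bits) x per-pin rate (Gbps) / 8 bits per byte.
def peak_bandwidth_gb_s(bus_width_bits: int, gbps_per_pin: float) -> float:
    return bus_width_bits * gbps_per_pin / 8

print(peak_bandwidth_gb_s(256, 7))   # GDDR5 at ~7Gbps (assumed): 224 GB/s
print(peak_bandwidth_gb_s(256, 14))  # GDDR5X at 14Gbps:          448 GB/s
```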
Many Nvidia fans were hoping that the new GeForce cards would use HBM memory instead. As previously explained, this memory solution offers up to three times the bandwidth per watt of GDDR5. It also saves physical space thanks to its “stacked” design, with the memory dies stacked vertically next to the GPU rather than spread out around it on the board. The upshot is a smaller footprint, lower power consumption, and higher bandwidth.
Adding to that, the second-generation HBM specification was approved back in January; it supposedly stacks even more memory dies in the same height and doubles the original HBM’s throughput.
Nvidia is already using HBM2, as shown during its GPU Technology Conference 2016 keynote earlier this month. According to Nvidia, the HBM2 setup delivers three times the memory performance of its Maxwell-based GPUs. The Tesla P100 card itself carries 16GB of HBM2 on a CoWoS (Chip-on-Wafer-on-Substrate) package and offers 720GB/s of memory bandwidth. Nvidia has crammed eight of these cards into its new rack-mounted DGX-1 supercomputer for AI and deep-learning development.
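That 720GB/s figure neatly illustrates HBM2’s approach: a very wide bus running at a modest per-pin rate, roughly the opposite of GDDR5X. Here’s a quick sanity check, assuming the P100’s four HBM2 stacks with a 1024-bit interface each (figures drawn from Nvidia’s published P100 specs, not from this report).

```python
# Sanity check on the Tesla P100 figure: HBM2 pairs a huge bus width with a low per-pin rate.
# Assumed configuration: 4 HBM2 stacks x 1024 bits each (per Nvidia's published P100 specs).
bus_width_bits = 4 * 1024          # 4096-bit aggregate memory interface
bandwidth_gb_s = 720               # the figure quoted above
gbps_per_pin = bandwidth_gb_s * 8 / bus_width_bits
print(round(gbps_per_pin, 2))      # ~1.41 Gbps per pin, versus up to 14 for GDDR5X
```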
However, as PC Perspective points out, it’s not the memory but the GPU itself that actually “defines” a graphics card’s performance (although plenty of fast memory helps). AMD’s Fury X was the first graphics card to use HBM, yet it still underperformed against Nvidia’s GeForce GTX 980 Ti, which uses standard GDDR5 memory.
Given that we’re in the back half of April, Nvidia will likely reveal the GeForce GTX 1080 and GeForce GTX 1070 Pascal cards this summer, with availability in Q3 2016. There are supposedly three GeForce cards in production that use the GP104 silicon, plus two more GP104-based parts that will likely debut in Nvidia’s professional Quadro line. That said, we’ll just have to play the wait-and-see game for now.