
Nvidia’s Turing chip reinvents computer graphics (but not for gaming)


Nvidia’s latest graphics chip design, called “Turing,” was rumored to be the foundation of the company’s next family of GeForce cards for gamers. Nope. When the company showcased the chips during the SIGGRAPH 2018 conference in Vancouver, British Columbia this week, it highlighted their application in Quadro RTX-branded cards for professionals: the Quadro RTX 8000, the RTX 6000 and the RTX 5000.

The new GPU architecture — Nvidia says it “reinvents computer graphics” — introduces “RT Cores” designed to accelerate ray tracing, a technique in graphics rendering that traces the path of light in a scene so that objects are shaded correctly, light reflects naturally, and shadows fall in their correct locations. Ray tracing typically demands enormous computational power per frame, so rendering a photorealistic scene can take considerable time. But Nvidia promises real-time ray tracing, meaning there’s no wait for the cores to render the lighting of each frame.
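To make the workload concrete, here’s a minimal sketch of the core ray-tracing step: casting a ray and testing whether it intersects an object in the scene. This is the kind of intersection math RT Cores accelerate in hardware; the scene setup and function names below are illustrative, not Nvidia’s API.

```python
import math

def ray_hits_sphere(origin, direction, center, radius):
    """Return the distance to the nearest hit, or None if the ray misses."""
    # Solve |origin + t*direction - center|^2 = radius^2 for t (a quadratic).
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx*dx + dy*dy + dz*dz
    b = 2 * (ox*dx + oy*dy + oz*dz)
    c = ox*ox + oy*oy + oz*oz - radius*radius
    disc = b*b - 4*a*c
    if disc < 0:
        return None                      # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2*a)   # nearest of the two intersections
    return t if t > 0 else None

# A ray fired straight down -z from the camera at the origin:
hit = ray_hits_sphere((0, 0, 0), (0, 0, -1), (0, 0, -5), 1.0)
print(hit)  # the sphere's surface is 4 units away
```

A real renderer repeats this test millions of times per frame, against every object, for camera rays, shadow rays, and reflection bounces, which is why doing it in dedicated silicon matters.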

For PC gaming, that’s a dramatic leap in visual fidelity. The current approach relies on a technique called rasterization, which converts the 3D scene into the 2D grid of pixels shown on the connected monitor. To re-create the look of the 3D environment, the program uses “shaders” to compute the varying levels of light, darkness, and color.
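The heart of rasterization is a simple projection: each 3D point is divided by its depth to find where it lands on the 2D screen. A hedged sketch, with an illustrative pinhole camera and resolution of my own choosing:

```python
def project(point, focal=1.0, width=1920, height=1080):
    """Perspective-project a 3D point (camera space, -z is forward) to pixels."""
    x, y, z = point
    # Perspective divide: farther points land closer to the screen center.
    ndc_x = focal * x / -z
    ndc_y = focal * y / -z
    # Map normalized coordinates [-1, 1] onto pixel coordinates.
    px = int((ndc_x + 1) * 0.5 * width)
    py = int((1 - ndc_y) * 0.5 * height)
    return px, py

print(project((0, 0, -2)))  # a point on the camera axis lands at screen center
```

Shading then fills in each projected triangle pixel by pixel, approximating light rather than simulating its actual paths the way ray tracing does.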


“The Turing architecture dramatically improves raster performance over the previous Pascal generation with an enhanced graphics pipeline and new programmable shading technologies,” the company says. “These technologies include variable-rate shading, texture-space shading, and multi-view rendering, which provide for more fluid interactivity with large models and scenes and improved VR experiences.”

According to Nvidia, Turing is its biggest leap since the introduction of CUDA. Not familiar with CUDA? Graphics cards and discrete GPUs once merely accelerated games for better visual fidelity. In 2006, Nvidia introduced CUDA, a platform that lets its chips handle general-purpose computing as well. In essence, this lets a graphics chip work in parallel with a PC’s main processor to handle larger workloads at a faster pace. As Nvidia tells it, Turing promises to be another such transition point in computing.
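The idea CUDA introduced is data parallelism: the same small “kernel” of work runs independently on every element, so a GPU can spread it across thousands of cores at once. A plain-Python illustration of the model (not actual CUDA code):

```python
def saxpy_kernel(i, a, x, y):
    """One thread's worth of work: compute a * x[i] + y[i] for a single index."""
    return a * x[i] + y[i]

a = 2.0
x = [1.0, 2.0, 3.0, 4.0]
y = [10.0, 20.0, 30.0, 40.0]

# On a CPU this loop runs sequentially; on a GPU, each index would be
# handled by a separate thread executing the kernel simultaneously.
result = [saxpy_kernel(i, a, x, y) for i in range(len(x))]
print(result)  # [12.0, 24.0, 36.0, 48.0]
```

Because each index is independent, the work scales almost perfectly with core count, which is what makes GPUs useful far beyond graphics.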

In addition to RT Cores for ray tracing, Turing also relies on Tensor Cores to accelerate artificial intelligence. (Nvidia, which apparently showers in money, handed out $3,000 Titan V graphics cards for free to A.I. researchers in June.) Tensor Cores will accelerate video re-timing, resolution scaling, and more, enabling “applications with powerful new capabilities.” Turing also includes a new streaming multiprocessor architecture capable of 16 trillion floating-point operations and 16 trillion integer operations per second.

The new Quadro RTX 8000 packs 4,608 CUDA cores and 576 Tensor cores and can render 10 GigaRays per second, a measure of how many light rays the chip can trace through a scene each second. The card includes 48GB of onboard memory and can address 96GB through NVLink.
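Some back-of-the-envelope math (my own, not Nvidia’s) puts that throughput in perspective: at 4K resolution and 60 frames per second, 10 GigaRays per second works out to roughly 20 rays per pixel per frame.

```python
rays_per_second = 10e9          # the Quadro RTX 8000's quoted throughput
pixels = 3840 * 2160            # 4K resolution
fps = 60
rays_per_pixel_per_frame = rays_per_second / (pixels * fps)
print(round(rays_per_pixel_per_frame, 1))  # roughly 20 rays per pixel
```

That budget covers camera rays plus shadow and reflection bounces, which is why real-time ray tracing still leans on clever sampling rather than brute force.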

Meanwhile, the RTX 6000 is similar save for the memory: 24GB onboard and 48GB through NVLink. The RTX 5000 has 3,072 CUDA cores, 384 Tensor cores, and 16GB of onboard memory (32GB via NVLink), and is capable of six GigaRays per second.

Companies already on the Quadro RTX bandwagon include Adobe, Autodesk, Dell, Epic Games, HP, Lenovo, Pixar and more.

For gamers, Nvidia’s next big Turing-based reveal is expected to be the GeForce RTX 2080 — not the previously rumored GTX 1180 — during its pre-show Gamescom press event on August 20. Clever.

Kevin Parrish
Former Digital Trends Contributor