
Nvidia’s Turing chip reinvents computer graphics (but not for gaming)

Nvidia’s latest graphics chip design, called “Turing,” was rumored to be the foundation of the company’s next family of GeForce cards for gamers. Nope. When the company showcased the chips during the SIGGRAPH 2018 conference in Vancouver, British Columbia this week, it highlighted their application in Quadro RTX-branded cards for professionals: the Quadro RTX 8000, the RTX 6000 and the RTX 5000.

The new GPU architecture — Nvidia says it “reinvents computer graphics” — introduces “RT Cores” designed to accelerate ray tracing, a technique in graphics rendering that traces the path of light in a scene so that objects are shaded correctly, light reflects naturally, and shadows fall in their correct locations. Typically this job requires huge amounts of computational power for each frame, taking lots of time to render a photorealistic scene. But Nvidia promises real-time ray tracing, meaning there’s no wait for the cores to render the lighting of each frame.
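To see what the RT Cores are accelerating, here's a deliberately tiny, CPU-bound Python sketch of the core idea: fire a ray through each pixel, find what it hits, and shade that point based on where the light is. The single sphere, the light position, and the image size are all made up for illustration; real renderers trace many rays per pixel with multiple bounces each frame, which is exactly why dedicated hardware matters.

```python
# Minimal CPU ray tracer: one ray per pixel, one sphere, one point light.
# Illustrates the technique Turing's RT Cores accelerate in hardware;
# the scene and all constants here are made up for illustration.
import numpy as np

WIDTH, HEIGHT = 320, 240
sphere_center = np.array([0.0, 0.0, -3.0])
sphere_radius = 1.0
light_pos = np.array([2.0, 2.0, 0.0])

def intersect_sphere(origin, direction):
    """Return distance along the ray to the sphere, or None if it misses."""
    oc = origin - sphere_center
    b = 2.0 * np.dot(direction, oc)
    c = np.dot(oc, oc) - sphere_radius ** 2
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - np.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

image = np.zeros((HEIGHT, WIDTH))
for y in range(HEIGHT):
    for x in range(WIDTH):
        # Shoot a ray from the camera at the origin through this pixel.
        u = (x / WIDTH - 0.5) * (WIDTH / HEIGHT)
        v = 0.5 - y / HEIGHT
        direction = np.array([u, v, -1.0])
        direction /= np.linalg.norm(direction)
        t = intersect_sphere(np.zeros(3), direction)
        if t is None:
            continue  # ray missed: leave the background black
        # Shade the hit point by how directly it faces the light (Lambert).
        hit = t * direction
        normal = (hit - sphere_center) / sphere_radius
        to_light = light_pos - hit
        to_light /= np.linalg.norm(to_light)
        image[y, x] = max(np.dot(normal, to_light), 0.0)

print("brightest pixel:", image.max())
```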

For PC gaming, real-time ray tracing would be a dramatic leap in visual fidelity. The current approach relies on a technique called rasterization, which projects the 3D scene onto the 2D grid of pixels the connected monitor can display. To approximate the look of the 3D environment, the program then runs "shaders" that work out the varying levels of light, darkness, and color for those pixels.
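For contrast, here's an equally stripped-down Python sketch of rasterization: project a single triangle's corners onto the screen, then test every pixel to see whether it falls inside. The triangle coordinates and the flat shading are made up for illustration; a real pipeline does this for millions of triangles and runs far more elaborate shaders on the covered pixels.

```python
# Minimal rasterizer: project one 3D triangle to the screen, then test each
# pixel against the triangle's edges and give covered pixels a flat "shade."
# The triangle, camera, and shade value are made up for illustration.
import numpy as np

WIDTH, HEIGHT = 320, 240
triangle = np.array([[-1.0, -1.0, -4.0],
                     [ 1.0, -1.0, -4.0],
                     [ 0.0,  1.0, -4.0]])

def project(p):
    """Perspective-project a 3D point onto 2D pixel coordinates."""
    x = p[0] / -p[2]
    y = p[1] / -p[2]
    return np.array([(x + 0.5) * WIDTH, (0.5 - y) * HEIGHT])

def edge(a, b, p):
    """Signed area test: positive when p is to the left of edge a->b."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

v0, v1, v2 = (project(v) for v in triangle)
image = np.zeros((HEIGHT, WIDTH))
for y in range(HEIGHT):
    for x in range(WIDTH):
        p = (x + 0.5, y + 0.5)
        # A pixel is covered when it sits on the same side of all three edges.
        w0, w1, w2 = edge(v1, v2, p), edge(v2, v0, p), edge(v0, v1, p)
        if (w0 >= 0 and w1 >= 0 and w2 >= 0) or (w0 <= 0 and w1 <= 0 and w2 <= 0):
            image[y, x] = 1.0  # a real shader would compute lighting and color here

print("covered pixels:", int(image.sum()))
```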

“The Turing architecture dramatically improves raster performance over the previous Pascal generation with an enhanced graphics pipeline and new programmable shading technologies,” the company says. “These technologies include variable-rate shading, texture-space shading, and multi-view rendering, which provide for more fluid interactivity with large models and scenes and improved VR experiences.”

According to Nvidia, Turing is its biggest leap since the introduction of CUDA. Not familiar with CUDA? Discrete graphics chips once did little more than accelerate games for better visual fidelity. In 2006, Nvidia introduced its CUDA platform, which lets those same chips handle general-purpose computing as well. In essence, a graphics chip can work alongside a PC's main processor, churning through highly parallel workloads far faster than the CPU could on its own. As Nvidia tells it, Turing promises to be another such transition point in computing.
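As a rough sketch of what that general-purpose model looks like in practice, the snippet below offloads a simple array addition to the GPU, with every GPU thread handling one element at the same time. It uses the third-party Numba library rather than Nvidia's own CUDA C, the array size and operation are arbitrary, and it needs an Nvidia GPU plus the Numba package installed to actually run.

```python
# A sketch of general-purpose GPU computing in the CUDA style, using the
# third-party Numba library. Each GPU thread adds one pair of elements, so
# the whole array is processed in parallel rather than one element at a time.
import numpy as np
from numba import cuda

@cuda.jit
def add_arrays(a, b, out):
    i = cuda.grid(1)        # this thread's global index
    if i < a.shape[0]:      # guard threads that fall past the end of the array
        out[i] = a[i] + b[i]

n = 1_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)

d_a = cuda.to_device(a)                 # copy inputs to GPU memory
d_b = cuda.to_device(b)
d_out = cuda.device_array_like(d_a)     # space for the result on the GPU

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
add_arrays[blocks, threads_per_block](d_a, d_b, d_out)

print(np.allclose(d_out.copy_to_host(), a + b))  # True: GPU result matches the CPU
```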

In addition to RT Cores for ray tracing, Turing also relies on Tensor Cores to accelerate artificial intelligence. (Nvidia, which apparently showers in money, handed out $3,000 Titan V graphics cards for free to A.I. researchers in June.) Tensor Cores will speed up tasks like video re-timing and resolution scaling, which Nvidia says will let developers build "applications with powerful new capabilities." Turing also includes a new streaming multiprocessor architecture capable of 16 trillion floating-point operations in parallel with 16 trillion integer operations each second.
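For context, Tensor Cores are built around fused matrix multiply-accumulate math, the operation neural networks lean on most heavily. The toy NumPy snippet below shows that operation in ordinary software; the 4-by-4 size and the mixed 16-bit/32-bit precision are only meant to mirror the flavor of what the hardware does, not how it does it.

```python
# The fused multiply-accumulate operation (D = A x B + C) that Tensor Cores
# run in hardware, shown here in plain NumPy. The 4x4 size and the mixed
# float16/float32 precision are illustrative only.
import numpy as np

A = np.random.rand(4, 4).astype(np.float16)   # low-precision inputs
B = np.random.rand(4, 4).astype(np.float16)
C = np.random.rand(4, 4).astype(np.float32)   # higher-precision accumulator

D = A.astype(np.float32) @ B.astype(np.float32) + C
print(D)
```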

The new Quadro RTX 8000 packs 4,608 CUDA cores and 576 Tensor Cores, and it's capable of casting 10 GigaRays per second, a measure of how many billions of rays the hardware can trace through a scene each second. The card also includes 48GB of onboard memory and can access up to 96GB by pairing two cards over NVLink.

Meanwhile, the RTX 6000 is similar save for the memory: 24GB onboard and 48GB through NVLink. The RTX 5000 pairs 3,072 CUDA cores with 384 Tensor Cores and 16GB of onboard memory (32GB via NVLink), and it's capable of six GigaRays per second.
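To put those GigaRays figures in rough perspective, a quick back-of-the-envelope calculation helps. The 4K resolution and 60-frames-per-second target below are assumptions for illustration, not Nvidia's numbers.

```python
# Back-of-the-envelope math for 10 GigaRays per second. The 4K resolution and
# 60 fps target are assumptions for illustration, not Nvidia's figures.
rays_per_second = 10e9                 # Quadro RTX 8000's claimed throughput
width, height, fps = 3840, 2160, 60    # 4K at 60 frames per second

pixels_per_second = width * height * fps
print(rays_per_second / pixels_per_second)   # roughly 20 rays per pixel, per frame
```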

Companies already on the Quadro RTX bandwagon include Adobe, Autodesk, Dell, Epic Games, HP, Lenovo, Pixar and more.

For gamers, Nvidia’s next big Turing-based reveal is expected to be the GeForce RTX 2080 — not the previously rumored GTX 1180 — during its pre-show Gamescom press event on the 20th of August. Clever.

Kevin Parrish
Former Digital Trends Contributor
Kevin started taking PCs apart in the 90s when Quake was on the way and his PC lacked the required components. Since then…