Nvidia defies pushback, defends 8GB of VRAM in recent GPUs

RTX 4060 Ti sitting on a pink background.
Jacob Roach / Digital Trends

Nvidia’s CEO Jensen Huang is defending the recently launched RTX 4060 Ti, and in particular, its 8GB of VRAM. The executive spoke about gaming and recent GPU releases in a roundtable interview with reporters at Computex 2023, where he faced questions about the limited VRAM on Nvidia’s most recent GPU.

PCWorld shared a quote in which Huang defended the 8GB of VRAM and told gamers to focus more on how that VRAM is managed: “Remember the frame buffer is not the memory of the computer — it is a cache. And how you manage the cache is a big deal. It is like any other cache. And yes, the bigger the cache is, the better. However, you’re trading off against so many things.”

The RTX 4060 Ti has been met with a lot of criticism over its 8GB of VRAM, especially as demands for video memory have scaled up in recent PC releases such as Resident Evil 4, Hogwarts Legacy, and The Last of Us Part One. Huang’s response reads as a defense, but it doesn’t address the specific VRAM problems players have run into with the RTX 4060 Ti.

As Huang says, VRAM is essentially a cache. Data is moved from your storage into your system memory and finally into VRAM. The problem is that fetching data from system memory is far slower than reading it from local VRAM, which is a big reason why we’ve seen major stuttering issues and crashes in games like The Last of Us Part One and Hogwarts Legacy. Huang says that managing VRAM is like “kung fu,” and although that may be true in theory, we’ve seen that management fall short in practice.
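To see why a working set that outgrows the frame buffer hurts so much, here is a toy Python sketch that models VRAM as a fixed-capacity LRU cache in front of system memory. The capacities, asset sizes, and per-access costs are illustrative assumptions, not real GPU timings:

```python
from collections import OrderedDict

# Toy model: VRAM as an LRU cache in front of system memory.
# All numbers are illustrative assumptions, not real GPU measurements.
VRAM_CAPACITY_MB = 8192   # 8GB card
VRAM_HIT_COST = 1          # touching an asset already resident in VRAM
SYSMEM_MISS_COST = 100     # stalling to stream it in from system memory

def frame_cost(assets, vram):
    """Sum the cost of touching each (name, size_mb) asset this frame."""
    cost = 0
    for name, size in assets:
        if name in vram:
            vram.move_to_end(name)       # LRU: mark as recently used
            cost += VRAM_HIT_COST
        else:
            cost += SYSMEM_MISS_COST     # stall while data streams in
            vram[name] = size
            while sum(vram.values()) > VRAM_CAPACITY_MB:
                vram.popitem(last=False) # evict the least recently used asset
    return cost

small_set = [(f"tex{i}", 512) for i in range(12)]  # 6GB working set: fits
large_set = [(f"tex{i}", 512) for i in range(20)]  # 10GB working set: does not

vram = OrderedDict()
frame_cost(small_set, vram)           # warm-up frame loads everything once
fits = frame_cost(small_set, vram)    # steady state: every access is a hit

vram.clear()
frame_cost(large_set, vram)           # warm-up frame
thrash = frame_cost(large_set, vram)  # steady state: LRU evicts each asset
                                      # just before it is needed again
print(fits, thrash)
```

Once the working set fits, every frame is cheap; once it exceeds capacity, a sequential access pattern makes LRU miss on every single asset, every frame. Smarter management (Huang’s “kung fu”) can soften this, but no cache policy rescues a working set that simply doesn’t fit.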

Huang also didn’t address what exactly gamers are trading off with a larger VRAM capacity. Perhaps power draw, perhaps cost, perhaps some combination of factors; it’s hard to tell. Nvidia seems to recognize that capacity is an issue, though, and it’s releasing an RTX 4060 Ti variant with 16GB of VRAM in July. It’s the same GPU with the same core performance, just with double the VRAM and a price $100 higher than the base model.

Nvidia CEO showing the RTX 4060 Ti at Computex 2023.
Nvidia

Following its Computex keynote, Nvidia became the first chip designer ever to reach a market value of over $1 trillion, joining only five other companies currently at that mark. The milestone rang hollow for many gamers, however, as Nvidia largely focused on AI during its keynote, apart from the new Nvidia ACE engine for generative AI in games.

In a response to PCWorld, Huang said that gamers still come first for Nvidia. “Without AI, we could not do ray tracing in real time. It was not even possible,” Huang said. “And the first AI project in our company — the number one AI focus was Deep Learning Super Sampling (DLSS). Deep learning. That is the pillar of RTX.”

It’s fair for gamers to feel left behind, however. Nvidia’s most recent generation has pushed GPU prices higher than they’ve ever gone, and in Nvidia’s most recent earnings call, its data center and AI revenue was double what it made from gaming. In addition, Nvidia’s public showcases have increasingly focused on AI, with supercomputers like the DGX GH200 taking center stage.

Jacob Roach
Lead Reporter, PC Hardware
Jacob Roach is the lead reporter for PC hardware at Digital Trends. In addition to covering the latest PC components, from…