The RTX 4080 unlaunch is the worst news for GPU prices since crypto

Nvidia is “unlaunching” the 12GB RTX 4080, which is such a strange move that Nvidia made up a whole new word to mark the occasion. On one hand, it’s a positive development for a card that most of us thought was a very bad idea. On the other hand, it’s also a very worrying sign for already rising GPU prices.

At $900, the 12GB RTX 4080 looked like a slight price increase over the previous generation — about $200. It’s a price hike, but one that Nvidia could easily defend. Nvidia is on the record saying that the cost of manufacturing GPUs is going up, and the RTX 4090 (which has already proved very popular) saw a similar $100 price increase over the previous generation.


It’s hard to defend the price hikes now, though. The 16GB RTX 4080 is just the RTX 4080 now, which means we can truly compare prices one-to-one. At $1,200, the RTX 4080 is a full $500 more expensive than the RTX 3080. The cost of manufacturing may be going up, but a $100 to $200 price increase is much different than a $500 price hike.

When Nvidia announced the RTX 4080 models originally, the community saw the 16GB model as being the “true” RTX 4080 and the 12GB model as being a rebranded RTX 4070. Nvidia didn’t see the two models that way. In briefings with press, Nvidia described the 12GB version as the base RTX 4080, while the 16GB model was an “enhanced” version — maybe something similar to the $1,200 RTX 3080 Ti that launched in the previous generation.

Nvidia’s backpedaling tells a different story. It’s impossible to say whether Nvidia actually saw the 16GB version as a true successor to the RTX 3080, or whether it shifted stances following the overwhelmingly positive reception of the RTX 4090. But it doesn’t matter. With the 12GB model officially canned, Nvidia is sending a clear message: GPUs in this class should sell above $1,000.

There has been speculation that Nvidia planned to cement this class of GPU above $1,000 for a while. The $1,200 price tag on the RTX 3080 Ti came as a shock — it was a $200 increase over the RTX 2080 Ti — and the 12GB RTX 3080 never received an official list price, but sold above $1,000 at launch.


We just came out of a GPU shortage, which sent prices skyrocketing on the back of increased demand from cryptocurrency miners and limited supply. In many cases, gamers were spending two times the list price for a GPU, even for cards available at retailers. List prices remained intact, though, and now that the GPU shortage is over, cards have slipped and settled back in toward their list prices. Most RTX 3080 models, for example, are available around $700.

That demonstrates the power of list price. Although supply and demand ultimately dictate how much graphics cards cost, the list price sets the goalposts — when supply and demand are in balance, GPUs should sell for their list price. Nvidia is moving those goalposts with the RTX 4080.

Similarly, GPU classifications have power. Although Nvidia CEO Jensen Huang is on the record as saying “it’s just a number” in reference to the two RTX 4080 models, Nvidia clearly understands how segmenting two GPUs under that name is misleading. It’s good that Nvidia is finally recognizing that fact, but it shouldn’t distract from the $500 price increase that the RTX 4080 now carries.

It’s easy to get wrapped up in speculation about Nvidia (and AMD, for that matter) increasing list prices to match scalper prices. Up to this point, though, that’s all it has been: speculation. We now have a clear sign of rising GPU prices, driven not by supply, demand, or scalpers, but simply by charging more generation over generation.

Jacob Roach
Lead Reporter, PC Hardware