Why I don’t upgrade my CPU for higher frame rates anymore

Although GPUs are often the focus for gaming, CPUs are pitched as an important upgrade too. To hear AMD and Intel tell it, we all need the fastest Ryzen 7 5800X3D or Core i9-12900KS to get truly good gaming performance.

And that’s partially true. If you upgrade from a CPU made five years ago to one made today, you’ll get way more frames.

But when you’re considering what expensive component to upgrade to your PC next, the CPU probably shouldn’t be your first choice. For all the rhetoric surrounding CPUs and gaming performance, midrange and even low-end CPUs made within the last 5 years could perform better than you might think.

My own upgrade path

Installed CPU on a motherboard.

Like many people, I learned just how ineffective certain upgrades can be the hard way: through first-hand experience.

I’ve had my ups and downs with CPU upgrades over the years. My first PC had an AMD A8-7650K, an APU with integrated graphics. Not exactly a monster gaming PC, but that’s where my journey began. In striving to hit better frame rates, I turned first to upgrading my GPU, as you should. Moving up to a discrete Radeon R9 380 significantly boosted performance in games, but I still wasn’t satisfied. My machine still struggled to reach my goal of 60 frames per second (fps) in games like The Witcher 3.

I figured that if upgrading my GPU couldn’t get me there, I should upgrade my CPU next. I tried the Athlon 860K, which had a higher clock speed, but no luck. I then tried the Athlon 880K with an even higher clock speed, but once again my frame rate didn’t improve. Fed up with my results, I decided to wait for AMD’s first-generation Ryzen chips, which were just around the corner.

As soon as Ryzen 1000 launched, I upgraded to a Ryzen 7 1700 and thankfully I could finally get 60 fps in pretty much every game I played. Unfortunately, this created an incorrect perception in my head about the benefit of CPU upgrading. Perhaps you’ve had a similar experience, but I went into my cycles of Ryzen upgrades eager to see how they would transform my PC’s gaming performance. That was even more heightened by the claims from AMD and reviewers about how Ryzen was finally on par with Intel for gaming performance.

You can imagine my disappointment, then, when I saw nearly identical frame rates after multiple generations of upgrades. The realization hit hard that when it comes to CPU performance, it’s far more complicated than I’d previously thought.

CPU benchmarking is complicated

An AMD Ryzen 7 CPU box being held in a hand.
Bill Roberson/Digital Trends

If you’re anything like me, you may have believed at one point that CPU benchmarking works just like GPU benchmarking. But as I learned, it doesn’t.

The greatest strength of GPUs is flexibility. If your GPU isn’t achieving a high enough frame rate, you can simply turn down graphical quality settings to get more frames. Or, if you have more frames than you need, you can exchange them for higher-quality visuals.

Let’s say you have two graphics cards, the AMD RX 6950 XT and an RX 6650 XT, and you want to know how they compare to each other when paired with a top-end CPU. If you test at 1440p and the maximum settings in a really demanding game like Cyberpunk 2077, you’ll find that the 6950 XT is about 70% faster. If you lower the settings, both GPUs will get more frames but the 6950 XT will still be about 70% faster, at least in Cyberpunk 2077. That’s how GPU benchmarking works, and it’s pretty intuitive.

CPUs just don’t have as much headroom as GPUs do.

But CPU benchmarking is far more complicated. While benchmarking my Ryzen 7 1700, 2700, and 3700X, I tried replicating the results that other reviewers had gotten, such as the claim that the 3700X was about 15% faster than the 1800X. By overclocking my 1700, I essentially had an 1800X I could test. However, the 3700X merely tied the overclocked 1700, and I couldn’t replicate the lead that the reviews showed.

Then I tried dropping graphics settings to increase my frame rate, something I normally wouldn’t do since I was already getting my preferred 90 fps. As soon as I did, the 3700X began to pull away from the 1700 in most of the games I played, and I was able to replicate the 15% lead that sites were reporting. Therein lay my mistake.

Why does this happen in CPU benchmarking? Why was the race neck and neck with some settings but then not close at all with others? Well, as it turns out, it all comes down to the differences between CPU and GPU bottlenecking.

Identifying bottlenecks

A bottleneck occurs when one component is so slow that other components are being held back, which means in order to obtain higher performance, you either need to upgrade the component causing the bottleneck or tweak the settings to shift the bottleneck somewhere else. As graphics settings are lowered, the performance limitations of the CPU become more relevant until it becomes the bottleneck.

The issue is that even the best CPUs don’t have as much headroom as GPUs do. Games lean heavily on your graphics card, so settings tweaks can have a huge impact if you’re limited by your GPU. On the other hand, only a select few settings affect your CPU, leading to a hard performance wall that no amount of settings changes can get you past. CPUs are simply nowhere near as flexible as GPUs when it comes to gaming performance.
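The hard wall described above can be sketched with a toy model (my own simplification, not a real renderer): treat the CPU and GPU work per frame as two independent times, with the slower one setting the frame rate. Lowering graphics settings shrinks only the GPU's share, so the frame rate eventually flattens at the CPU's cap. All millisecond figures here are made-up illustrative numbers.

```python
# Toy model of a frame pipeline: each frame needs CPU prep and GPU rendering,
# so the effective frame time is bounded by the slower of the two components.

def fps(cpu_ms: float, gpu_ms: float) -> float:
    """Approximate frame rate when the CPU and GPU work on frames in parallel."""
    return 1000.0 / max(cpu_ms, gpu_ms)

cpu_ms = 8.0  # hypothetical CPU time per frame; settings barely change this
for gpu_ms in (16.0, 10.0, 4.0):  # lowering graphics settings shrinks GPU time
    # Once gpu_ms drops below cpu_ms, the frame rate stops improving:
    # that flat ceiling is the CPU bottleneck.
    print(f"gpu={gpu_ms:4.1f} ms -> {fps(cpu_ms, gpu_ms):5.1f} fps")
```

With these numbers, the frame rate climbs from 62.5 fps to 100 fps as GPU time falls, then stalls at 125 fps no matter how light the GPU load gets, because the 8 ms of CPU work per frame becomes the limit.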

Tales of Arise on the Sony InZone M9 gaming monitor.
Jacob Roach / Digital Trends

Reviews that test at multiple resolutions and with multiple GPUs illustrate the odd nature of the CPU bottleneck pretty clearly. In this Ryzen 5 1600 versus Ryzen 5 5600 comparison, TechSpot tested with the 6950 XT and the 6600 XT at both 1080p and 1440p. At 1440p with the 6600 XT, the 1600 and 5600 are often neck and neck, but when the PC is allowed to achieve higher frame rates by lowering the resolution and upgrading to the 6950 XT, the 5600 pulls away. On average, the 5600 was only 16% faster using the 6600 XT at 1440p, but that gap widens to over 70% using the 6950 XT at 1080p. Basically, CPUs have inherent frame rate caps depending on the game, and the slower the CPU, the lower the cap.

You might also wonder why the 5600 can be so much faster than the 1600. They’re both six-core CPUs based on similar architectures and have similar clock speeds. The key difference is cache and latency. The 5600 has 32MB of L3 cache, compared to the 16MB of the 1600, and the cores in the 5600 can communicate with each other much more quickly on average than the cores in the 1600. While GPUs with more cores are great for gaming, for CPUs the ability to quickly move small amounts of data around is king.

This is why I never saw any difference between my old A8-7650K, the 860K, and the 880K. As it turns out, they all have the exact same amount of cache and are pretty similar in general, so I never could have expected better gaming performance. However, the 3700X has double the cache of the 1700 and 2700, so why didn’t I see a frame rate increase after I upgraded? Because I didn’t actually need the upgrade.

How to know you need an upgrade

A hand holds the Intel Core i9-12900KS.
Jacob Roach / Digital Trends

To be clear, I’m not saying you never need to upgrade your CPU. You just need to proceed with caution.

Whether or not you need to upgrade depends primarily on the frame rates you want to see in a given game. If I’d had a clearer idea of what I wanted to achieve, I would have avoided the mistakes I made early in my upgrade path. For example, I usually game at around 60 to 90 fps because I prefer to raise graphics settings when possible. That’s why I never saw a higher frame rate after upgrading to the Ryzen 7 3700X: the Ryzen 7 1700 is already capable of 60 to 90 fps in most games. Before I bought that 3700X, I should have done two things.

First, I should have checked my GPU usage in my most-played games using Task Manager or MSI Afterburner. If you consistently see around 97% GPU usage, a CPU upgrade won’t improve your frame rates, because you’re clearly bottlenecked by your GPU. The exact figure matters, too. At around 80% to 90% usage, upgrading your CPU will increase your frame rate, but not by much. By contrast, if your GPU usage is close to 50%, you could potentially double your frame rate by upgrading to a better CPU.
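Those rules of thumb are easy to capture in a small helper. This is a hypothetical sketch of my own, not any official guidance: it takes a GPU utilization percentage you've observed in Task Manager, MSI Afterburner, or a tool like `nvidia-smi`, and maps the thresholds from the text onto a rough verdict.

```python
# Hypothetical helper: turn an observed GPU utilization percentage into a
# rough verdict on whether a CPU upgrade is likely to raise frame rates.
# Thresholds (97%, 80%) mirror the rules of thumb in this article.

def cpu_upgrade_verdict(gpu_util_percent: float) -> str:
    if gpu_util_percent >= 97:
        return "GPU-bound: a CPU upgrade won't raise your frame rate"
    if gpu_util_percent >= 80:
        return "mostly GPU-bound: a CPU upgrade helps only a little"
    return "CPU-bound: a faster CPU could raise your frame rate a lot"

for util in (98, 85, 50):
    print(f"{util}% GPU usage -> {cpu_upgrade_verdict(util)}")
```

Read the utilization while actually playing a demanding scene in your game, not at the menu, since menus and loading screens rarely reflect real load.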

The other thing I should have done was think about the kind of frame rates I wanted to see. In the Ryzen 3000 reviews, I saw that the 3700X was much faster than the 1700 and 2700, but I failed to note that reviewers like to test in high-frame-rate scenarios to show off the capabilities of new CPUs. That’s interesting for enthusiasts, but it can also be misleading. If I’d been aiming for frame rates closer to 200 fps, the 3700X would have been a noticeable upgrade. If you’re aiming for really high frame rates and you notice your GPU usage is low, that’s a sure sign an upgrade is a good idea.

There is one caveat to all this: some games just don’t benefit from better hardware. My GPU usage was low when I played Total War: Attila on my Ryzen 7 1700, so I should have seen a big boost on the 3700X, and yet I didn’t. This can happen with certain games, especially older ones, that are poorly optimized or have performance-reducing bugs. Before you upgrade, research your games and make sure people aren’t complaining about them running poorly on high-end hardware.

Matthew Connatser
Former Digital Trends Contributor
Matthew Connatser is a freelance writer who works on writing and updating PC guides at Digital Trends. He first got into PCs…