AMD’s multi-chiplet GPU design might finally come true

RX 7900 XTX installed in a test bench.
Jacob Roach / Digital Trends

An interesting AMD patent has just surfaced, and although it was filed a while back, it's all the more exciting now because this tech may be closer to appearing in future graphics cards. The patent describes a multi-chiplet GPU built from three separate dies, a design that could both improve performance and cut production costs.

In the patent, AMD describes a GPU partitioned into multiple dies, which it calls GPU chiplets. These chiplets can either function together as a single GPU or operate as multiple independent GPUs. The GPU has three modes in total. In the first, all the chiplets work together as a single, unified GPU, sharing resources; as Tom's Hardware notes, this lets the front-end die handle command scheduling for all of the shader engine dies, much like a regular, monolithic GPU would.

The second mode is where it gets interesting. Here, every chiplet counts as an independent GPU, handling its own task scheduling within its shader engines without interfering with the other chiplets. Finally, the third mode is a mix of the two: some chiplets operate independently while others are grouped together to function as one GPU.
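The three operating modes described above can be sketched in a few lines of code. This is a purely illustrative model, not AMD's implementation (which lives in hardware and firmware); every name here, from `Mode` to `partition`, is hypothetical:

```python
from enum import Enum

class Mode(Enum):
    UNIFIED = 1      # first mode: all chiplets act as one GPU
    INDEPENDENT = 2  # second mode: each chiplet is its own GPU
    HYBRID = 3       # third mode: a mix of grouped and independent chiplets

def partition(chiplets, mode, groups=None):
    """Return the logical GPUs exposed for a given mode.

    chiplets: list of chiplet IDs, e.g. ["front_end", "shader0", "shader1"]
    groups:   for HYBRID mode, an explicit caller-defined grouping
    """
    if mode is Mode.UNIFIED:
        # One logical GPU containing every die; the front end
        # would schedule commands for all shader engine dies.
        return [chiplets]
    if mode is Mode.INDEPENDENT:
        # Each die is its own GPU and schedules its own work.
        return [[c] for c in chiplets]
    # HYBRID: some dies grouped, others standalone.
    return groups

print(partition(["front_end", "shader0", "shader1"], Mode.INDEPENDENT))
# -> [['front_end'], ['shader0'], ['shader1']]
```

The point of the sketch is that the same physical dies can present as one, many, or a mix of logical GPUs depending on a mode setting, which is exactly the flexibility the patent claims.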

The design of an AMD multi-chiplet GPU.
AMD

As mentioned, this patent is not new. It was filed on December 8, 2022, shortly after AMD released the RX 7900 XTX and the RX 7900 XT. Although leakers have predicted for at least a generation or two that AMD might go down the multi-chiplet route, this architecture is currently only really used in AMD's data center GPUs. AMD has already dipped its toes into similar tech with RDNA 3, though, with a design that paired a graphics compute die (GCD) with multiple memory cache dies (MCDs) for the memory interface.


There are tangible benefits to switching to this type of architecture, as per the patent: “By dividing the GPU into multiple GPU chiplets, the processing system flexibly and cost-effectively configures an amount of active GPU physical resources based on an operating mode.” If it turns out to be cheaper to produce these GPUs than ever-larger monolithic dies, we might start seeing this design outside the data center and in the GPUs we all use in our own computers.

Early leaks about RDNA 4 graphics cards teased AMD going with a full multi-chiplet design, and it’s easy to imagine that the final result could have resembled what we see in the patent. However, with the news that AMD is sticking to midrange graphics cards in this next generation, any hope of a multi-chiplet GPU seems lost for now. Perhaps we’ll see this design come to life in RDNA 5.

Monica J. White