
3 reasons why I’m worried about Intel’s upcoming Meteor Lake chips

Intel recently unveiled its upcoming Meteor Lake CPU for desktops and laptops, and as a tech enthusiast, I’m really impressed. I’m also really, really worried.

Meteor Lake, or Intel 14th-gen, is still more than a year out, so it’d be silly of me to worry about performance. What I am worried about is how these chips are being developed and manufactured.

Meteor Lake uses too many different nodes and dies

Intel Meteor Lake chip. Image credit: Wccftech

AMD has been wielding chiplet technology against Intel to great success for years now. Intel is finally taking notes with Meteor Lake, but its approach to using chiplets couldn’t be more different from AMD’s.

A node, or process, is the manufacturing technology a processor is built on, and it’s a critical factor in a CPU’s performance and cost of production. The Meteor Lake chip Intel showed off at Hot Chips uses no fewer than four different nodes, which is a staggering number for a mainstream chip. The CPU die uses the cutting-edge Intel 4 process, and according to Tom’s Hardware, the GPU die uses TSMC’s cutting-edge 5nm, the IO and SOC dies use TSMC’s 6nm, and the Foveros interposer uses Intel’s old 22nm.

Why so many nodes? Well, Intel has taken a “mix and match” approach to chiplets and wants to use many dies per processor to achieve maximum customization. While that’s certainly a good way to tailor a processor to its intended use case, it’s also very expensive. Rather than developing and refining a single chip, Intel has to design and validate several pieces of silicon, each of which could be on a different process. The cost of making lots of different chiplets is multiplied by the use of different nodes, which also requires Intel’s engineers to be familiar with far more processes than ever before.

AMD’s approach could not be more different. Its entire CPU portfolio uses just two nodes: TSMC’s 7nm and GlobalFoundries’ 12nm. That’s spread across three dies: the 7nm CPU die, the 12nm desktop IO die, and the 12nm server IO die. AMD also has its two current-generation 7nm APU dies, which are monolithic rather than chiplet-based.

AMD has achieved this level of simplicity by combining many functions into a single die. Meteor Lake, for example, has separate dies for its graphics, IO, and SOC functions. The upcoming chiplet-based Ryzen 7000 CPUs, on the other hand, combine all of these into a single die, which allows desktop Ryzen CPUs to be used in the mobile market in the form of Dragon Range. Granted, the graphics capabilities of AMD’s new CPU (or APU) won’t be particularly great, but they make sense for the chip’s intended use. Meteor Lake is more complex, but the gains don’t seem worth it.

All of that gives me concerns about the economic viability of manufacturing these chips. When tech is overly expensive to make, we’re often left wondering what the company will have to do to turn it into a profitable product in the end.

Dies that can’t be adapted to other markets

AMD CEO Dr. Lisa Su holding an AMD Threadripper processor.

Speaking of other markets, that’s another key weakness in Intel’s strategy. The double whammy for Meteor Lake’s financial prospects is that Intel has no plans to use any of Meteor Lake’s four dies in other segments, which forfeits one of the key benefits of chiplets. Intel wants to harness chiplets to make its CPUs extremely modular and customizable, but that doesn’t seem superior to AMD’s approach.

According to Intel, of Meteor Lake’s four different dies, only the IO and SOC dies will be reused, and only in Arrow Lake, which will come with new CPU and GPU chiplets. And that’s just desktops and laptops; Intel is also making different dies for servers and high-end desktops. Intel might need to deploy as many as a dozen different chiplets to cover the entire CPU market in 2023, nine of them for Meteor Lake alone. AMD, meanwhile, looks to be planning three chiplets plus one or two monolithic APUs for 2023.

That Intel isn’t pursuing a more efficient strategy that uses fewer nodes and fewer dies is baffling to me. AMD mastered this aspect of chiplets back in 2019, and Intel has had every opportunity to follow suit. Intel says this design philosophy is cheaper than monolithic designs and sidesteps the need to use expensive cutting-edge processes for the entire CPU, but I’m not convinced. At the very least, using four different dies (two of which use cutting-edge nodes anyway) is undoubtedly more expensive than using two, as AMD does in its CPUs.

CPUs designed like Meteor Lake are vulnerable to delays

The thing I most worry about with Meteor Lake is another delay. Intel is the last company that needs delayed products, especially after the struggles it’s had with its Arc GPU line. This CPU is uniquely vulnerable to delays in a way we haven’t really seen with other processors.

Once again, the mix-and-match method is the problem. Using so many different nodes and dies greatly increases the chance that somewhere along the way, a problem will force Meteor Lake (and other CPUs designed like it) to be delayed. If just a single die misses its deadline due to issues with the node or the design, the whole CPU is delayed. The number of potential points of failure in Meteor Lake is worryingly high.

Admittedly, this is a pretty speculative point. While there are rumors of delays to Meteor Lake’s CPU and GPU chiplets, they seem unfounded. Still, Intel is a company that suffered delay after delay over a single point of failure: its 10nm node. TSMC’s 6nm and 5nm nodes are tried and tested, but Intel 4 isn’t, and to avoid a delay, Intel needs to get the design of all four dies right without major problems. That’s what worries me when I look at Intel’s track record.

Intel has a lot riding on its chiplet strategy. The company posted a half-billion-dollar loss in the second quarter of this year, its first loss in a very long time, and now it’s going forward with a design philosophy that doesn’t seem to maximize economic viability. Intel only just made a comeback last year with the success of Alder Lake, but that goodwill could easily be undone by higher prices and delays. Let’s hope Intel has a plan to circumvent these problems and deliver a strong showing in 2023.

Matthew Connatser
Former Digital Trends Contributor