Intel, AMD, and Qualcomm: What Do Their Next-Gen NPUs Have to Offer?

Key Takeaways

  • Intel’s NPU 4 offers 48 TOPS, outperforming previous chips significantly.
  • AMD’s XDNA 2 boasts 50 TOPS, with dynamic programmability for AI processing.
  • Qualcomm’s Snapdragon X Elite offers 45 TOPS, with reported power consumption issues.



Neural Processing Units, or NPUs, are the latest hardware to get top billing from processor manufacturers. These units are built specifically to handle AI workloads, and each of the big manufacturers has its own take on them. Let’s see what we’re in for in the near future.


NPUs and Their Place In Laptops

If you haven’t heard about AI PCs and their need for more processing power by now, you might not be aware of how far NPUs have come. NPUs are meant to complement existing CPUs, with hardware built specifically for AI processing. Chip manufacturers have recognized how important on-chip AI acceleration has become, and NPUs are their answer.



Qualcomm’s Snapdragon X Elite was the first of the new NPUs, but Intel and AMD have been upping the ante, both in processing power and architecture design. Each of these manufacturers has a line of chips specifically designed to power Copilot+ PCs, which is supposed to unlock the potential of Microsoft’s AI assistant.

These chips are designed primarily for laptop use, but that doesn’t mean we won’t see desktop versions of NPU chips in the future. Comparing what’s already available with the reports of what to expect gives us an idea of what’s going on in NPU development and what we can expect in the near future.

Intel’s Next-Gen NPU: NPU 4

Intel’s newest chip integrates an NPU along with several fancy additions that make for much more efficient AI processing. NPU 4 is included in Intel’s Lunar Lake architecture. When you dig under the surface, this NPU offers a lot of benefits for AI application processing.


Intel claims that the NPU can offer peak throughput of 48 tera operations per second (TOPS), making it Intel’s most powerful NPU to date. NPU 4 significantly outpaces Intel’s last-generation NPU 3, offering more than four times the computation power.

This is partially due to the chip’s architecture. The chip runs two parallel NPUs, each with its own independent inference pipeline, making for better handling of task data. Each unit has its own multiply-accumulate (MAC) array, which handles the complex matrix math needed in AI processing. To top it all off, the SHAVE DSP included in each unit delivers as much as four times the processing power of its predecessor, supporting larger and more complex neural networks.
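To see why MAC arrays matter, here’s a minimal sketch in plain Python of the multiply-accumulate operation that such arrays perform in hardware. The function name and matrices are illustrative, not part of Intel’s design; the sketch just shows that a matrix multiply is nothing but repeated MACs, which is also why TOPS figures conventionally count each MAC as two operations (one multiply plus one add).

```python
# Minimal sketch: the multiply-accumulate (MAC) step that NPU MAC arrays
# perform in hardware, shown as a naive matrix multiply in Python.
# Illustrative only -- not Intel's actual implementation.

def matmul_with_mac_count(a, b):
    """Multiply two matrices, counting every multiply-accumulate step."""
    rows, inner, cols = len(a), len(b), len(b[0])
    result = [[0] * cols for _ in range(rows)]
    macs = 0
    for i in range(rows):
        for j in range(cols):
            acc = 0
            for k in range(inner):
                acc += a[i][k] * b[k][j]  # one MAC: multiply, then accumulate
                macs += 1
            result[i][j] = acc
    return result, macs

a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
product, macs = matmul_with_mac_count(a, b)
print(product)    # [[19, 22], [43, 50]]
print(macs)       # 8 MACs for a 2x2 * 2x2 multiply
print(macs * 2)   # 16 "operations" by the usual TOPS counting convention
```

Hardware MAC arrays run thousands of these steps in parallel per clock cycle, which is where the TOPS numbers in this article come from.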


Research into Lunar Lake suggests that it may be able to run Microsoft’s Copilot locally. What’s more, Lunar Lake is likely to have a very efficient power signature. Intel states that its Thread Director technology will reduce the NPU’s energy consumption, making it an attractive option for low-power applications.

When can we get our hands on this technology? While Intel initially suggested a soft launch of the chip, Lunar Lake laptops are now here.

AMD’s Next-Gen NPU: XDNA 2

So what does AMD have to offer on the NPU horizon? AMD’s latest foray into the NPU market is the XDNA 2 chip. The company claims that it can outperform Intel’s NPU 4, offering 50 TOPS to the NPU 4’s 48. The second iteration of XDNA increases the number of engine tiles on the chip from 20 to 32, which, combined with per-tile improvements, yields a five-fold increase in NPU TOPS between generations.
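The tile count alone doesn’t explain the full jump. A bit of back-of-the-envelope arithmetic shows why: assuming the widely reported 10 TOPS figure for the first-generation XDNA (an outside number, not stated in this article), the tiles account for only a 1.6x gain, so the rest must come from faster tiles.

```python
# Back-of-the-envelope generational scaling for AMD's XDNA NPUs.
# The 10 TOPS figure for first-gen XDNA is an outside assumption.

xdna1_tiles, xdna2_tiles = 20, 32
xdna1_tops, xdna2_tops = 10, 50

tile_scaling = xdna2_tiles / xdna1_tiles      # 1.6x from extra tiles alone
tops_scaling = xdna2_tops / xdna1_tops        # 5.0x overall TOPS gain
per_tile_gain = tops_scaling / tile_scaling   # ~3.1x throughput per tile

print(f"{tile_scaling:.2f}x tiles, {tops_scaling:.1f}x TOPS, "
      f"{per_tile_gain:.2f}x per-tile throughput")
```

In other words, roughly a third of the generational gain comes from adding tiles and the rest from making each tile faster.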

The XDNA architecture itself is based on a spatial dataflow design. AMD built it to be flexible and adaptable, with dynamic programmability for constructing custom processing hierarchies for AI workloads. The XDNA 2 engine is an innovation in processing technology in its own right.


I mentioned that the NPU has engine tiles, and this is crucial to the adaptability of the system. Each tile acts as an independent worker with its own computation and memory resources. That independence lets the chip do more while drawing less power. The chip is designed to handle both standard workloads and AI processing, like Microsoft’s Copilot.
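The independent-worker idea can be sketched in software. The toy example below, purely conceptual and not AMD’s actual XDNA programming model, splits a workload across “tiles” that each keep their own local memory and compute their slice in parallel:

```python
# Conceptual sketch of a spatial architecture: independent "tiles," each
# with its own local memory, process slices of a workload in parallel.
# Not AMD's actual XDNA programming model -- names are illustrative.
from concurrent.futures import ThreadPoolExecutor

class Tile:
    def __init__(self, tile_id):
        self.tile_id = tile_id
        self.local_memory = {}  # each tile keeps its own scratchpad

    def run(self, chunk):
        self.local_memory["chunk"] = chunk
        return sum(x * x for x in chunk)  # stand-in for a compute kernel

def dispatch(data, num_tiles=4):
    """Split work across tiles, the way a spatial array would."""
    tiles = [Tile(i) for i in range(num_tiles)]
    chunks = [data[i::num_tiles] for i in range(num_tiles)]
    with ThreadPoolExecutor(max_workers=num_tiles) as pool:
        partials = list(pool.map(lambda tc: tc[0].run(tc[1]),
                                 zip(tiles, chunks)))
    return sum(partials)

print(dispatch(list(range(8))))  # 0+1+4+9+16+25+36+49 = 140
```

The payoff in hardware is the same as in the sketch: no shared bottleneck between workers, so adding tiles scales throughput without every tile contending for one memory bus.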

AMD was notably the first x86 chipmaker to include an NPU in its processors. That head start puts the company ahead of the game and opens the door to pushing the boundaries of what NPUs can do, since it already has experience integrating them with CPUs. If you’re wondering when you’ll get to try out these NPUs, they’ve already been released: AMD shipped the first Ryzen AI 300 laptops in July 2024.


Qualcomm’s Next-Gen NPU: Snapdragon X Elite


One of the earliest chips to hit the market in this new generation of NPUs was the Snapdragon X Elite. Qualcomm is no stranger to mobile chips, but this is its most ambitious push into the laptop market yet. The company has dedicated a lot of resources to carving out a piece of the pie and showing that its chip can compete against well-established manufacturers.

It’s a little less powerful than Intel’s and AMD’s offerings, with a maximum processing power of 45 TOPS. At release, the chip outperformed everything on the market. However, with AMD and Intel’s new and improved NPUs arriving, the Snapdragon X Elite risks falling behind.


The Snapdragon X Elite is notorious for being more power-hungry than its competitors, and the laptops released with the chip run a bit louder thanks to the extra heat it generates. The X Elite is built on a new processor core that Qualcomm calls “Oryon.”

Oryon is an innovation in itself: it’s the first all-new, from-scratch CPU core Qualcomm has designed in years. It’s slated to appear not only in future Snapdragon NPU-equipped chips but also across the company’s mobile lineup. The NPU is a great way to carve out a foothold in the market. However, there have already been issues with Windows-on-Arm software, even though Microsoft has done wonders with its Prism emulation layer.


If you’re interested in testing out what Snapdragon has done with its chips, you can get your hands on laptops right now. The company launched them in April 2024, and since then they’ve seen plenty of adoption despite the high price point. Most people are using these laptops as a starting point to see how NPU-powered machines compare to their non-NPU-powered cousins.

Is It Important to Have an NPU on a Laptop?

AI is a buzzword in tech these days, and manufacturers touting NPUs might just be jumping on the bandwagon. Is it necessary for a laptop to have an NPU? Current AI workloads primarily run on GPUs.

Unless you’re running an LLM locally, or you really rely on Microsoft’s Copilot, the current generation of NPUs isn’t necessary. They are, however, a sign of things to come. While you might not need an NPU on your machine today, it’s only a matter of time before having one is a requirement to get certain jobs done.

