Is an AI PC just a normal laptop or desktop pre-loaded with AI apps? Or does it have some special hardware in it? Let us find out…

From appliances and buildings to yachts and zoos, suddenly everything seems to be powered by artificial intelligence (AI). But hardly a handful of these products are genuinely AI-enabled; the rest are just the smart devices of yore rebranded as AI-powered. So the overwhelming marketing push for AI personal computers (PCs), both laptops and desktops, does make one wonder: is it all hype, or is there really something to it?
What ‘really’ is an AI PC?
Some brands pre-load AI assistants or AI app suites onto their existing laptop and desktop models and sell them as AI PCs. A true AI PC, however, is equipped with the right hardware, not merely the software, to handle AI and machine learning (ML) inference locally on the device, without going online or connecting to a cloud. At the heart of an AI PC is an interesting piece of hardware called the neural processing unit (NPU). It may go by different names, such as AI accelerator, AI core, or neural engine, but the NPU is what makes AI tick in edge devices.
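Whether a given machine can actually run inference on such hardware is something you can probe from software. Here is a minimal sketch in Python, assuming the onnxruntime package is installed: it simply lists the execution providers the installed ONNX Runtime build can use. On an AI PC, NPU-backed providers (for example, Qualcomm's QNN or Intel's OpenVINO) may appear in this list; an ordinary PC typically reports only the CPU provider. The exact names depend on the platform and the ONNX Runtime build.

```python
# A minimal sketch, assuming the onnxruntime package is installed.
import onnxruntime as ort

# List the execution providers this build of ONNX Runtime can use.
# On an AI PC, NPU-backed providers such as "QNNExecutionProvider"
# (Qualcomm) or "OpenVINOExecutionProvider" (Intel) may appear here;
# a plain PC typically reports only "CPUExecutionProvider".
print(ort.get_available_providers())
```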
We all know what central processing units (CPUs) and graphics processing units (GPUs) are good at. To use a common cliché, the CPU is the brain of the computer. It is great at handling sequential tasks and managing the operating system, general applications, input/output, and the overall workflow inside a computer. GPUs, on the other hand, excel at parallel processing: analysing large amounts of data, rendering images, editing videos, training AI models, and so on. They are commonly found in enterprise and gaming PCs.
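To make the contrast concrete, here is a rough, hypothetical benchmark in Python, assuming PyTorch is installed: the same large matrix multiplication, the bread-and-butter operation of AI workloads, runs first on the CPU and then, if one is present, on a CUDA GPU, where massive parallelism typically finishes the job far faster.

```python
# A rough sketch, assuming PyTorch is installed: the same large matrix
# multiplication runs on the CPU and, if present, on a CUDA GPU.
import time
import torch

a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

# The CPU handles this fine, but it is not built for massive parallelism.
t0 = time.time()
_ = a @ b
print(f"CPU matmul: {time.time() - t0:.3f} s")

# A GPU's thousands of cores chew through the same job in parallel.
if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()
    torch.cuda.synchronize()  # wait for the transfer before timing
    t0 = time.time()
    _ = a_gpu @ b_gpu
    torch.cuda.synchronize()  # wait for the kernel to finish
    print(f"GPU matmul: {time.time() - t0:.3f} s")
```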
With the AI boom, the logical progression was to use these same GPUs to develop and run AI models locally on PCs. But while GPUs are good for training AI models, engineers soon realised they are not really cut out for repetitive AI inference. GPUs are power-hungry creatures, so using them for on-device AI/ML inference tends to drain the battery and heat up the device. This leads to power and thermal throttling, that is, a reduction in clock speeds that slows the GPU down to protect the hardware. Moreover, repetitive inference tends to hog the GPU, leaving it unavailable for the tasks it was originally meant to do!
This dilemma led to the development of neural processing units (NPUs): chips specialising in repetitive AI/ML tasks, designed to work alongside CPUs and GPUs in AI devices. NPUs have massive parallel processing abilities, yet are smaller, less expensive, lower in latency, and far more power-efficient than GPUs. The development of NPUs is a critical milestone in AI history, as it has given edge AI a much-needed boost.
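In practice, software reaches the NPU through a runtime that dispatches work to whichever accelerator is available. The hypothetical sketch below, again assuming Python with onnxruntime installed, shows the common pattern: pass a preference-ordered list of execution providers, so inference lands on the NPU where one exists (Qualcomm's QNN provider is used purely as an example) and falls back to the CPU otherwise. The model file name and input handling are placeholders.

```python
# A minimal sketch, assuming onnxruntime is installed. "model.onnx" is a
# placeholder for any local model file. The provider list is a preference
# order: ONNX Runtime picks the first available provider, so the NPU
# handles inference where it exists and the CPU remains the fallback.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "model.onnx",  # hypothetical model path
    providers=["QNNExecutionProvider", "CPUExecutionProvider"],
)

# Run one local inference; input name and shape depend on the model.
# Dynamic dimensions (reported as strings) are fixed to 1 for this demo.
inp = session.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
sample = np.random.rand(*shape).astype(np.float32)
result = session.run(None, {inp.name: sample})
```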