Neural processing unit

Hardware acceleration unit for artificial intelligence tasks

A neural processing unit (NPU), also known as AI accelerator or deep learning processor, is a class of specialized hardware accelerator[1] or computer system[2][3] designed to accelerate artificial intelligence (AI) and machine learning applications, including artificial neural networks and computer vision.

Image: The NVIDIA H100 PCIe high-performance graphics card, with 80 GiB of HBM2e memory, about 2 TB/s of memory bandwidth, and 14,592 CUDA cores. Such cards usually run in a network or cluster and cost several tens of thousands of dollars each.

Use

Their purpose is either to efficiently execute already trained AI models (inference) or to train AI models. Their applications include algorithms for robotics, Internet of things, and data-intensive or sensor-driven tasks.[4] They are often manycore or spatial designs and focus on low-precision arithmetic, novel dataflow architectures, or in-memory computing capability. As of 2024, a widely used datacenter-grade AI integrated circuit chip, the Nvidia H100 GPU, contains tens of billions of MOSFETs.[5]

Consumer devices

AI accelerators are used in mobile devices such as Apple iPhones, Huawei devices, and Google Pixel smartphones,[7] and appear in many Apple silicon, Qualcomm, Samsung, and Google Tensor smartphone processors.[8] AMD likewise integrates AI engines into its Versal adaptive SoCs and its NPUs.[6]

Image: A Google Tensor Processing Unit (TPU) v4 package (the ASIC in the center plus four HBM stacks) and a printed circuit board (PCB) carrying four liquid-cooled packages; the board's front panel has four top-side PCIe connectors (2023).

NPUs have more recently (circa 2022) been added to desktop and laptop processors from Intel,[9] AMD,[10] and Apple (Apple silicon).[11] All Intel Meteor Lake processors include a built-in versatile processor unit (VPU) for accelerating inference in computer vision and deep learning.[12]

On consumer devices, the NPU is intended to be small and power-efficient while remaining reasonably fast when running small models. To this end, NPUs are designed to support low-bit-width operations on data types such as INT4, INT8, FP8, and FP16. A common performance metric is trillions of operations per second (TOPS), though this figure alone does not indicate which kinds of operations are being counted.[13]
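
To illustrate why these low-bit-width types matter, the following is a minimal NumPy sketch of symmetric INT8 quantization; the function and variable names are illustrative and not taken from any particular NPU SDK. Storing weights as INT8 quarters the memory footprint relative to FP32, and integer multiply-accumulate units are cheaper and more power-efficient than floating-point ones.

```python
import numpy as np

# Toy sketch of symmetric INT8 quantization: FP32 values are mapped onto
# [-127, 127] with a single per-tensor scale factor.
def quantize_int8(x: np.ndarray):
    scale = float(np.max(np.abs(x))) / 127.0
    scale = scale if scale > 0.0 else 1.0           # avoid division by zero
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(256, 256).astype(np.float32)    # e.g., a weight matrix
q, s = quantize_int8(w)

print(q.nbytes, w.nbytes)                            # 4x smaller storage
print(float(np.max(np.abs(dequantize(q, s) - w))))   # worst-case rounding error
```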

Datacenters

Accelerators are used in cloud computing servers, including tensor processing units (TPU) in Google Cloud Platform[14] and Trainium and Inferentia chips in Amazon Web Services.[15] Many vendor-specific terms exist for devices in this category, and it is an emerging technology without a dominant design.

Since the late 2010s, graphics processing units designed by companies such as Nvidia and AMD often include AI-specific hardware in the form of dedicated functional units for low-precision matrix-multiplication operations. These GPUs are commonly used as AI accelerators, both for training and inference.[16]
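
As a rough illustration of how this hardware is exercised, the following PyTorch sketch runs a matrix multiplication in FP16 so that, on a GPU with dedicated matrix units, the low-precision path is used; the matrix sizes are arbitrary, and the snippet falls back to FP32 on the CPU when no GPU is present.

```python
import torch

# Minimal sketch: a low-precision matrix multiplication of the kind that
# GPU matrix units (e.g., tensor cores) accelerate.
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32   # CPU fallback

a = torch.randn(4096, 4096, device=device, dtype=dtype)
b = torch.randn(4096, 4096, device=device, dtype=dtype)

# On recent GPUs this dispatches to the dedicated matrix-multiply units;
# accumulation is typically performed at higher precision internally.
c = a @ b
print(c.dtype, c.shape)
```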

Scientific computation

Image: A Hailo AI accelerator module attached to a Raspberry Pi 5 via an M.2 adapter HAT (2024).

Although NPUs are tailored for low-precision (e.g. FP16, INT8) matrix multiplication, they can be used to emulate higher-precision matrix multiplication in scientific computing. Because modern GPUs devote much of their silicon to fast low-precision matrix units, emulated FP64 (via the Ozaki scheme) on those units can outperform native FP64: this has been demonstrated using FP16-emulated FP64 on the NVIDIA TITAN RTX and using INT8-emulated FP64 on NVIDIA consumer GPUs and the A100 GPU. Consumer GPUs benefit especially from this scheme, as they have little native FP64 hardware, showing a 6× speedup.[17] Since CUDA Toolkit 13.0 Update 2, cuBLAS automatically uses INT8-emulated FP64 matrix multiplication at equivalent precision when it is faster than the native path, in addition to the FP16-emulated FP32 feature introduced in version 12.9.[18]
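
The NumPy sketch below illustrates the splitting idea behind such emulation. It is a greatly simplified toy, not the actual Ozaki scheme: the real algorithm chooses slice widths so that every low-precision slice product is exact and recovers FP64-level accuracy, whereas this sketch only shows the overall structure. The function names and parameters (num_slices, bits) are illustrative.

```python
import numpy as np

# Toy illustration of emulated high-precision matrix multiplication:
# each FP64 matrix is written as a sum of "slices" with few mantissa bits,
# the slice products are formed in lower precision, and the partial results
# are accumulated in FP64.
def split(A, num_slices=3, bits=10):
    slices, rest = [], A.copy()
    for _ in range(num_slices):
        # keep roughly the top `bits` mantissa bits of each remaining entry
        exp = np.floor(np.log2(np.abs(rest) + np.finfo(np.float64).tiny))
        scale = 2.0 ** (exp - bits)
        hi = np.round(rest / scale) * scale
        slices.append(hi)
        rest = rest - hi
    return slices

def emulated_matmul(A, B, num_slices=3):
    C = np.zeros((A.shape[0], B.shape[1]))
    for Ai in split(A, num_slices):
        for Bj in split(B, num_slices):
            # on real hardware each slice product would run on the GPU's
            # low-precision matrix units (FP16 or INT8 tensor cores)
            C += Ai.astype(np.float32) @ Bj.astype(np.float32)
    return C

A = np.random.randn(64, 64)
B = np.random.randn(64, 64)
print(np.max(np.abs(emulated_matmul(A, B) - A @ B)))   # residual error
```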

Programming

Mobile NPU vendors typically provide their own application programming interface (API), such as the Snapdragon Neural Processing Engine. An operating system or a higher-level library may provide a more generic interface, such as LiteRT (formerly TensorFlow Lite) and LiteRT Next on Android, or Core ML on iOS and macOS.
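
As a sketch of how such a generic interface is used, the following Python snippet runs inference with the TensorFlow Lite interpreter and optionally hands execution to an NPU delegate. The delegate library name "libvendor_npu_delegate.so" and the model file "model.tflite" are placeholders, since real delegate libraries are vendor-specific.

```python
import numpy as np
import tensorflow as tf

# Minimal sketch: run a TFLite model, handing execution to an NPU delegate
# when one can be loaded, otherwise falling back to the CPU.
try:
    delegate = tf.lite.experimental.load_delegate("libvendor_npu_delegate.so")
    interpreter = tf.lite.Interpreter(model_path="model.tflite",
                                      experimental_delegates=[delegate])
except (ValueError, OSError):
    interpreter = tf.lite.Interpreter(model_path="model.tflite")   # CPU fallback

interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

x = np.zeros(inp["shape"], dtype=inp["dtype"])   # dummy input of the right shape
interpreter.set_tensor(inp["index"], x)
interpreter.invoke()
print(interpreter.get_tensor(out["index"]).shape)
```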

Consumer CPU-integrated NPUs are accessible through vendor-specific APIs: AMD (Ryzen AI), Intel (OpenVINO), and Apple silicon (Core ML)[a] each have their own, on top of which higher-level libraries can be built.
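
For example, the following sketch uses Intel's OpenVINO Python API to compile a model for the NPU device, falling back to the CPU when no NPU is present. The model file "model.xml" is a placeholder for an OpenVINO IR file, and the "NPU" device name assumes a recent OpenVINO release running on a processor with an NPU.

```python
import numpy as np
from openvino import Core

# Minimal sketch: compile and run a model on Intel's NPU device via OpenVINO.
core = Core()
model = core.read_model("model.xml")

device = "NPU" if "NPU" in core.available_devices else "CPU"
compiled = core.compile_model(model, device)

request = compiled.create_infer_request()
x = np.zeros(tuple(compiled.input(0).shape), dtype=np.float32)   # dummy input
result = request.infer({0: x})
print(list(result.values())[0].shape)
```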

GPUs generally use existing GPGPU pipelines such as CUDA and OpenCL adapted for lower precisions. Custom-built systems such as the Google TPU use private interfaces.
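
Higher-level frameworks hide these interfaces from the programmer; the short JAX sketch below (Python) shows the same matrix multiplication being compiled via XLA for whichever backend is present, whether CPU, GPU, or TPU.

```python
import jax
import jax.numpy as jnp

# Minimal sketch: the program is device-agnostic; XLA compiles the matmul
# for whatever accelerator backend is available.
print(jax.devices())            # e.g., [TpuDevice(...)] on a Cloud TPU VM

a = jnp.ones((1024, 1024), dtype=jnp.bfloat16)
b = jnp.ones((1024, 1024), dtype=jnp.bfloat16)

c = jax.jit(jnp.matmul)(a, b)   # dispatched to the available accelerator
print(c.shape, c.dtype)
```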

Notes

  1. MLX builds atop the CPU and GPU parts, not the Apple Neural Engine (ANE) part of Apple Silicon chips. The relatively good performance is due to the use of a large, fast unified memory design.

See also

References
