Llama.cpp
Software library for LLM inference
llama.cpp is an open source software library that performs inference on various large language models such as Llama.[3] It is co-developed alongside the GGML project, a general-purpose tensor library.[4]
Command-line tools are included with the library,[5] alongside a server with a simple web interface.[6][7]
Background
Towards the end of September 2022, Georgi Gerganov started work on the GGML library, a C library implementing tensor algebra. He designed the library around strict memory management and multi-threading. The creation of GGML was inspired by Fabrice Bellard's work on LibNC.[8]
Before llama.cpp, Gerganov worked on a similar library called whisper.cpp, which implemented Whisper, a speech-to-text model by OpenAI.[9]
Development
Georgi Gerganov began developing llama.cpp in March 2023 as an implementation of the Llama inference code in pure C/C++ with no dependencies. This improved performance on computers without a GPU or other dedicated hardware, which was a goal of the project.[3][10][11] llama.cpp gained traction with users who lacked specialized hardware, as it could run on a CPU alone, including on Android devices.[10][12][13] While initially designed for CPUs, GPU inference support was later added.[14] As of August 2025, it has more than 85,000 stars on GitHub.[15]
On April 30, 2024, support for FlashAttention was introduced.
On April 10, 2025, libmtmd was introduced, reinvigorating support for multimodal models, which had previously stagnated.
Architecture
llama.cpp supports multiple hardware targets, including x86, ARM, Metal, BLAS, BLIS, SYCL, MUSA, CUDA, HIP, CANN, OpenCL, RPC and Vulkan (version 1.2 or greater).[16][17][18][19] These back-ends make up the GGML tensor library, which is used by the front-end, model-specific llama.cpp code.[20] llama.cpp makes use of several CPU extensions for optimization: AVX, AVX2 and AVX-512 on x86-64, and NEON on ARM. Apple silicon is an important target for the project.[15][21]
llama.cpp supports a variety of features aimed at inference on edge devices, such as:
- Ahead-of-time model quantization and on-the-fly KV-cache quantization.[22]
- Speculative decoding.[7]
- Partial offloading of model layers between GPU VRAM and system RAM, allowing devices to run models that would be too large to fit solely in GPU VRAM (see the sketch after this list).
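A minimal sketch of how partial offloading can be requested through the library's C API is shown below. It is illustrative only: it assumes a recent version of llama.h, whose function names have varied between releases, and the model path and layer count are placeholder values.

// Minimal sketch (assumes a recent llama.h; API names have changed between releases).
// Loads a GGUF model and offloads a fixed number of layers to the GPU back-end,
// leaving the remaining layers in system RAM for the CPU back-end.
#include "llama.h"

int main() {
    llama_backend_init();

    llama_model_params mparams = llama_model_default_params();
    mparams.n_gpu_layers = 20;   // placeholder: layers kept in VRAM; the rest stay in system RAM

    // "model.gguf" is a placeholder path to a quantized GGUF file
    llama_model * model = llama_load_model_from_file("model.gguf", mparams);
    if (model == NULL) {
        return 1;
    }

    // ... create a context with llama_new_context_with_model() and run inference ...

    llama_free_model(model);
    llama_backend_free();
    return 0;
}

Layers that are not offloaded remain in system RAM and are evaluated on the CPU back-end.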
In addition, llama.cpp supports a variety of features and APIs for communication with front-ends.
GGUF file format
The GGUF (GGML Universal File)[25] file format is a binary format that stores both tensors and metadata in a single file, and is designed for fast saving and loading of model data.[26] It was introduced in August 2023 by the llama.cpp project to better maintain backwards compatibility as support was added for other model architectures.[14][27] It superseded previous formats used by the project, such as GGML.
GGUF files are typically created by converting models developed with a different machine learning library such as PyTorch.[26]
Design
GGUF focuses on quantization, the act of reducing precision in the model weights. This can lead to reduced memory usage and increased speed, albeit at the cost of reduced model accuracy.[28][27] For example, storing each weight in 4 bits rather than 16 roughly quarters the memory needed for the weights.
GGUF supports 2-bit to 8-bit quantized integer types,[29] common floating-point data formats such as float32, float16, and bfloat16, and 1.58-bit quantization.[5]
GGUF contains information necessary for running a GPT-like language model such as the tokenizer vocabulary, context length, tensor info and other attributes.[30]
Byte-level structure (little-endian)
Metadata block
// example metadata
general.architecture: 'llama',
general.name: 'LLaMA v2',
llama.context_length: 4096,
... ,
general.file_type: 10, // (typically indicates quantization level, here "MOSTLY_Q2_K")
tokenizer.ggml.model: 'llama',
tokenizer.ggml.tokens: [
'<unk>', '<s>', '</s>', '<0x00>', '<0x01>', '<0x02>',
'<0x03>', '<0x04>', '<0x05>', '<0x06>', '<0x07>', '<0x08>',
...
],
...
Tensors info block
// n-th tensor
name: GGUF string, // ex: "blk.0.ffn_gate.weight"
n_dimensions: UINT32, // ex: 2
dimensions: UINT64[], // ex: [ 4096, 32000 ]
type: UINT32, // ex: 10 (typically indicates quantization level, here "GGML_TYPE_Q2_K")
offset: UINT64 // starting position within the tensor_data block, relative to the start of the block
// (n+1)-th tensor
...
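Ahead of the metadata block, every GGUF file starts with a small fixed header: a 4-byte magic ("GGUF"), a 32-bit format version, and 64-bit counts of the tensor-info entries and metadata key-value pairs, all little-endian. The following sketch reads these header fields; it is a simplified illustration that assumes a little-endian host, and "model.gguf" is a placeholder file name.

// Minimal sketch of reading the fixed GGUF header that precedes the metadata block
// (assumes the little-endian GGUF layout described above and a little-endian host).
#include <cstdint>
#include <cstdio>
#include <cstring>

int main() {
    FILE * f = std::fopen("model.gguf", "rb");  // placeholder file name
    if (!f) return 1;

    char     magic[4];      // "GGUF"
    uint32_t version;       // format version
    uint64_t n_tensors;     // number of entries in the tensors info block
    uint64_t n_kv;          // number of key-value pairs in the metadata block

    if (std::fread(magic,      1, 4, f) != 4 ||
        std::fread(&version,   sizeof version,   1, f) != 1 ||
        std::fread(&n_tensors, sizeof n_tensors, 1, f) != 1 ||
        std::fread(&n_kv,      sizeof n_kv,      1, f) != 1 ||
        std::memcmp(magic, "GGUF", 4) != 0) {
        std::fclose(f);
        return 1;
    }

    std::printf("GGUF v%u: %llu tensors, %llu metadata keys\n",
                version,
                (unsigned long long) n_tensors,
                (unsigned long long) n_kv);
    std::fclose(f);
    return 0;
}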
References
