Llama · Dense · Llama 3.2 Community License

Llama 3.2 1B

Parameters: 1.2B
Max Context: 128K
Architecture: Dense
Released: Sep 25, 2024
Modality: Text

About Llama 3.2 1B

Llama 3.2 1B is Meta's ultra-compact model designed for on-device and edge deployment. At just 1.24B parameters across 16 layers, it can run on CPU, mobile phones, and Raspberry Pi-class hardware. While limited in reasoning depth, it handles basic chat, summarization, and classification tasks competently. The Q4_K_M quantized version uses only ~700 MB of VRAM.

On-Device · Edge · Classification · Basic Chat
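As a minimal sketch of what on-device use looks like, the snippet below loads a Q4_K_M GGUF file (the ~700 MB quantization mentioned above) with llama-cpp-python. The file name is a placeholder, not an official distribution; point model_path at whichever Q4_K_M GGUF you have locally.

```python
from llama_cpp import Llama

llm = Llama(
    model_path="llama-3.2-1b-instruct-q4_k_m.gguf",  # placeholder local file
    n_ctx=2048,      # a small context keeps the KV cache tiny on edge hardware
    n_gpu_layers=0,  # 0 = pure CPU; viable at this parameter count
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize: The quick brown fox..."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```

With n_gpu_layers=0 the model runs entirely on the CPU, which is the typical configuration for phones and Raspberry Pi-class boards.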

Technical Specifications

Total Parameters: 1.2B
Architecture: Dense
Attention Type: GQA (Grouped Query Attention)
Hidden Dimension: d = 2,048
Transformer Layers: 16
Attention Heads: 32
KV Heads: n_kv = 8
Head Dimension: d_head = 64
Activation Function: SwiGLU
Normalization: RMSNorm
Position Embedding: RoPE
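These numbers fit together simply: the 32 query heads share 8 KV heads (a group size of 4), and the head dimension follows from d / n_heads. A quick arithmetic check, using only the figures in the spec above:

```python
# Sanity-check the GQA geometry of Llama 3.2 1B.
d_model, n_heads, n_kv_heads, n_layers = 2048, 32, 8, 16

head_dim = d_model // n_heads        # 2048 / 32 = 64, matching the spec
group_size = n_heads // n_kv_heads   # 4 query heads share each KV head

# Per-token KV cache: K and V, every layer, FP16 (2 bytes per element).
kv_bytes_per_token = 2 * n_layers * n_kv_heads * head_dim * 2
print(head_dim, group_size, kv_bytes_per_token)  # 64 4 32768 (32 KiB/token)
```

That 32 KiB-per-token KV cache is what adds roughly 4 GiB to the 128K-context rows in the table below.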

System Requirements

Estimated VRAM (in GiB, with 10% overhead) for different quantization methods and context sizes.

Quantization                        1K ctx      128K ctx
Q4_K_M (0.50 B/W, ~97% of FP16)     0.67 GiB    4.64 GiB
Q8_0 (1.00 B/W, ~100% of FP16)      1.31 GiB    5.28 GiB
F16 (2.00 B/W, reference)           2.59 GiB    6.56 GiB

Every cell above falls in the calculator's "fits a 24 GB consumer GPU" tier; its other tiers are "fits an 80 GB datacenter GPU" and "requires cluster / multi-GPU".
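One accounting that reproduces the table is quantized weights at the listed bytes-per-weight with the 10% overhead applied, plus an FP16 KV cache sized from the GQA specs above. The exact parameter count used here (~1.248B, which the card rounds to 1.2B) and the weights-only overhead convention are assumptions inferred from the numbers, not documented calculator behavior.

```python
GIB = 2**30

N_PARAMS = 1.248e9  # assumed; the card rounds to 1.2B, this value fits the table
N_LAYERS, N_KV_HEADS, HEAD_DIM = 16, 8, 64
OVERHEAD = 1.10     # the 10% overhead, assumed to apply to weights only

def kv_cache_gib(ctx: int) -> float:
    # K and V, per layer, per KV head, FP16 (2 bytes per element).
    return 2 * N_LAYERS * N_KV_HEADS * HEAD_DIM * ctx * 2 / GIB

def vram_gib(bytes_per_weight: float, ctx: int) -> float:
    return N_PARAMS * bytes_per_weight / GIB * OVERHEAD + kv_cache_gib(ctx)

for name, bpw in [("Q4_K_M", 0.5), ("Q8_0", 1.0), ("F16", 2.0)]:
    print(f"{name:7s} {vram_gib(bpw, 1024):5.2f} GiB @ 1K   "
          f"{vram_gib(bpw, 128 * 1024):5.2f} GiB @ 128K")
```

Under these assumptions the script prints 0.67/4.64, 1.31/5.28, and 2.59/6.56 GiB, matching the rows above.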

Find the right GPU for Llama 3.2 1B

Use the interactive VRAM Calculator to see exactly how much memory you need at any quantization level, context length, and overhead setting.