Qwen · Dense · Apache 2.0

Qwen 2.5 0.5B

Parameters: 490M
Max Context: 32K
Architecture: Dense
Modality: Text

About Qwen 2.5 0.5B

Qwen 2.5 0.5B is a dense transformer language model from the Qwen family, containing 0.49B parameters across 24 layers. It supports up to 32K tokens of context with a hidden dimension of 896 and 2 KV heads for efficient grouped-query attention (GQA).
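
To see these figures in practice, here is a minimal sketch that loads the model and prints the relevant config fields. It assumes the Hugging Face transformers library and the public Qwen/Qwen2.5-0.5B checkpoint; adjust the model ID if you use a different variant.

```python
# Minimal sketch: load Qwen 2.5 0.5B and verify its architecture.
# Assumes the public "Qwen/Qwen2.5-0.5B" checkpoint on Hugging Face.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-0.5B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

cfg = model.config
print(cfg.hidden_size)           # 896
print(cfg.num_hidden_layers)     # 24
print(cfg.num_attention_heads)   # 14
print(cfg.num_key_value_heads)   # 2 -> grouped-query attention

# Quick smoke test.
inputs = tokenizer("The capital of France is", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```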

Use cases: On-Device, Basic Chat

Technical Specifications

Total Parameters: 490M
Architecture: Dense
Attention Type: GQA (Grouped-Query Attention)
Hidden Dimension: d = 896
Transformer Layers: 24
Attention Heads: 14
KV Heads: n_kv = 2
Head Dimension: d_head = 64
Activation Function: SwiGLU
Normalization: RMSNorm
Position Embedding: RoPE
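
These specs determine the KV-cache footprint during inference. Below is a small sketch using only the numbers from the list above; the 2-byte element size is an assumption (an FP16 cache).

```python
# KV-cache cost per token, derived from the specs above.
n_layers   = 24
n_heads    = 14
n_kv_heads = 2
d_model    = 896
d_head     = d_model // n_heads   # 896 / 14 = 64, matching the specs
elem_bytes = 2                    # assumes an FP16 KV cache

# Each layer stores K and V: n_kv_heads * d_head values apiece.
per_token = 2 * n_layers * n_kv_heads * d_head * elem_bytes
print(per_token)                   # 12288 bytes, ~12 KB per token
print(per_token * 32_768 / 2**30)  # ~0.375 GiB for a full 32K context
```

With only 2 KV heads instead of 14, GQA shrinks this cache by a factor of 7 versus full multi-head attention, which is what makes long contexts practical on such a small model.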

System Requirements

Estimated VRAM in GB, with 10% overhead, for different quantization methods and context sizes.

Quantization                        1K ctx      32K ctx
Q4_K_M (0.50 B/W, ~97% of FP16)     0.26 GB     0.63 GB
Q8_0 (1.00 B/W, ~100% of FP16)      0.52 GB     0.88 GB
F16 (2.00 B/W, reference)           1.02 GB     1.39 GB

All six estimates fit a 24 GB consumer GPU; none require an 80 GB datacenter GPU or a multi-GPU cluster.
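
The table can be approximated with a simple formula: weight bytes plus KV-cache bytes, scaled by the 10% overhead. Here is a sketch under that assumption; the page's exact rounding and overhead model may differ slightly, so treat the outputs as ballpark figures.

```python
# Rough reconstruction of the VRAM table above. Results land within
# a few percent of the listed values; the page's exact formula is
# not documented, so this is an approximation.
def vram_gb(params, bytes_per_weight, ctx_tokens,
            kv_bytes_per_token=12_288, overhead=1.10):
    weights = params * bytes_per_weight       # model weights in bytes
    kv = ctx_tokens * kv_bytes_per_token      # FP16 KV cache (see specs)
    return (weights + kv) * overhead / 2**30  # bytes -> GiB

for name, bpw in [("Q4_K_M", 0.5), ("Q8_0", 1.0), ("F16", 2.0)]:
    print(name,
          round(vram_gb(0.49e9, bpw, 1_024), 2),    # 1K context
          round(vram_gb(0.49e9, bpw, 32_768), 2))   # 32K context
```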

Find the right GPU for Qwen 2.5 0.5B

Use the interactive VRAM Calculator to see exactly how much memory you need at any quantization level, context length, and overhead setting.