Qwen · Dense · Apache 2.0

Qwen 3.5 0.8B

Qwen 3.5 0.8B is a dense transformer language model from the Qwen family, containing 0.8B parameters across 6 layers. It supports up to 262K tokens of context with a hidden dimension of 1024 and 2 KV heads for efficient grouped-query attention (GQA).

Parameters: 800M
Max Context: 256K
Architecture: Dense
Released:
Modality: Text

About Qwen 3.5 0.8B

Qwen 3.5 0.8B packs 800M parameters into 6 transformer layers with a hidden dimension of 1,024, using grouped-query attention (GQA) with 2 KV heads. It is released under the Apache 2.0 license and uses a hybrid DeltaNet-plus-attention layout in which only about 25% of layers are standard attention layers that maintain a KV cache, keeping memory growth modest at long context. The native context window is 262,144 tokens (256K), extensible to 1M tokens.
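As a rough illustration of why this layout matters for memory, the sketch below estimates KV-cache size from the dimensions listed on this page. The 25% attention-layer share comes from the description above; treating the remaining layers as cache-free DeltaNet layers and keeping the cache in FP16 are assumptions, not confirmed details.

```python
# Rough KV-cache estimate for Qwen 3.5 0.8B from the specs on this page.
# Assumptions: only the ~25% full-attention layers keep a cache, and the
# cache is stored in FP16 (2 bytes per value).

N_LAYERS = 6          # transformer layers
KV_HEADS = 2          # n_kv (GQA)
HEAD_DIM = 256        # d_head
KV_BYTES = 2          # FP16
ATTN_SHARE = 0.25     # hybrid DeltaNet+Attn: share of layers with a KV cache

def kv_cache_bytes(context_tokens: int) -> int:
    """KV-cache size in bytes at a given context length (K and V)."""
    attn_layers = max(1, round(N_LAYERS * ATTN_SHARE))
    per_token = 2 * KV_HEADS * HEAD_DIM * KV_BYTES * attn_layers
    return per_token * context_tokens

if __name__ == "__main__":
    for ctx in (1_024, 262_144):
        print(f"{ctx:>7} tokens -> {kv_cache_bytes(ctx) / 2**20:,.0f} MiB")
```

Under these assumptions the cache at the full 256K context stays around 1 GiB; a hypothetical variant caching all 6 layers with all 8 heads would need roughly 12 GiB.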

On-Device · Basic Chat

Technical Specifications

Total Parameters: 800M
Architecture: Dense
Attention Type: GQA (Grouped Query Attention)
Hidden Dimension: d = 1,024
Transformer Layers: 6
Attention Heads: 8
KV Heads: n_kv = 2
Head Dimension: d_head = 256
Activation Function: SwiGLU
Normalization: RMSNorm
Position Embedding: RoPE
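Mapped into code, the specifications above correspond to a configuration along these lines. This is a hand-written sketch for reference; the field names are illustrative, not the official configuration schema.

```python
from dataclasses import dataclass

@dataclass
class Qwen35_08BConfig:
    """Illustrative mirror of the spec table above; field names are
    hypothetical, not the official configuration schema."""
    hidden_size: int = 1024                 # d
    num_hidden_layers: int = 6
    num_attention_heads: int = 8
    num_key_value_heads: int = 2            # GQA
    head_dim: int = 256                     # d_head
    hidden_act: str = "swiglu"
    norm_type: str = "rmsnorm"
    position_embedding: str = "rope"
    max_position_embeddings: int = 262_144  # 256K native context
```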

System Requirements

Estimated VRAM in GB, at a 10% overhead margin, for different quantization methods and context sizes.

| Quantization | Bits/Weight | Quality | 1K ctx | 195K ctx | 256K ctx |
|---|---|---|---|---|---|
| Q4_K_M | 0.50 B/W | ~97% of FP16 | 0.43 | 2.70 | 3.41 |
| Q8_0 | 1.00 B/W | ~100% of FP16 | 0.84 | 3.12 | 3.83 |
| F16 | 2.00 B/W | Reference | 1.67 | 3.94 | 4.65 |

Every cell above falls in the lowest tier, "fits a 24 GB consumer GPU"; the other tiers are "fits an 80 GB datacenter GPU" and "requires a cluster / multi-GPU".
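For readers who want the arithmetic behind such tables, the sketch below applies the standard first-order estimate (weight bytes plus an FP16 KV cache, plus the 10% overhead margin) using this model's dimensions. It is an approximation: the calculator behind this table evidently applies further corrections, for example for the hybrid DeltaNet layers, so its figures differ somewhat from this formula's output.

```python
# First-order VRAM estimate: weights + KV cache + 10% overhead.
# A generic approximation, not the exact formula behind the table above.

PARAMS = 800e6       # total parameters
N_LAYERS = 6
KV_HEADS = 2         # n_kv (GQA)
HEAD_DIM = 256       # d_head
KV_BYTES = 2         # KV cache kept in FP16 (assumption)
OVERHEAD = 1.10      # the 10% margin stated above

def vram_gb(bytes_per_weight: float, ctx: int, attn_layers: int = N_LAYERS) -> float:
    """Estimated VRAM in GB for a given quantization and context length."""
    weights = PARAMS * bytes_per_weight
    kv = 2 * KV_HEADS * HEAD_DIM * KV_BYTES * attn_layers * ctx  # K and V
    return (weights + kv) * OVERHEAD / 1e9

if __name__ == "__main__":
    for name, bpw in (("Q4_K_M", 0.50), ("Q8_0", 1.00), ("F16", 2.00)):
        cells = "  ".join(f"{vram_gb(bpw, c):5.2f}" for c in (1_024, 195_000, 262_144))
        print(f"{name:7s} {cells}  (GB)")
```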


Find the right GPU for Qwen 3.5 0.8B

Use the interactive VRAM Calculator to see exactly how much memory you need at any quantization level, context length, and overhead setting.