Qwen · Dense · Apache 2.0

Qwen 3.6 27B

Qwen 3.6 27B is a dense transformer language model from the Qwen family, containing 27B parameters across 16 layers. It supports up to 262K tokens of context with a hidden dimension of 5,120 and 4 KV heads for efficient grouped-query attention (GQA).

Parameters: 27.0B
Max Context: 256K
Architecture: Dense
Released:
Modality: Text

About Qwen 3.6 27B

Qwen 3.6 27B is a dense transformer language model from the Qwen family, containing 27B parameters across 16 layers. It supports up to 262K tokens of context with a hidden dimension of 5,120 and 4 KV heads for efficient grouped-query attention (GQA). Released under the Apache 2.0 license, it is positioned as a dense 27B coding specialist and uses a hybrid DeltaNet + attention design in which only 25% of the layers carry a KV cache.
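
As a rough illustration of what that hybrid layout saves, the sketch below sizes the attention KV cache at the stated shapes (4 KV heads, 256-dim heads, FP16 entries). Treating "25% KV cache layers" as 4 of the 16 layers is an assumption read off the note above, not a confirmed spec.

def kv_cache_gib(seq_len, n_layers, n_kv_heads=4, head_dim=256, elem_bytes=2):
    # 2x for keys and values; one entry per token, per KV head, per caching layer.
    return 2 * n_layers * n_kv_heads * head_dim * elem_bytes * seq_len / 2**30

print(kv_cache_gib(262_144, n_layers=16))  # 16.0 GiB if every layer cached KV
print(kv_cache_gib(262_144, n_layers=4))   # 4.0 GiB with the hybrid layout

At the full 262K-token context, caching KV on only a quarter of the layers cuts the attention cache from 16 GiB to 4 GiB.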

Tags: Code

Technical Specifications

Total Parameters: 27.0B
Architecture: Dense
Attention Type: GQA (Grouped Query Attention)
Hidden Dimension: d = 5,120
Transformer Layers: 16
Attention Heads: 24
KV Heads: n_kv = 4
Head Dimension: d_head = 256
Activation Function: SwiGLU
Normalization: RMSNorm
Position Embedding: RoPE
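
To make these shapes concrete, here is a minimal GQA sketch at the dimensions above (24 query heads sharing n_kv = 4 KV heads of size 256). It is illustrative only, not the actual Qwen implementation.

import torch
import torch.nn.functional as F

n_heads, n_kv, d_head, seq = 24, 4, 256, 8
q = torch.randn(1, n_heads, seq, d_head)
k = torch.randn(1, n_kv, seq, d_head)  # K/V projections are 6x smaller
v = torch.randn(1, n_kv, seq, d_head)

# Each group of 24 // 4 = 6 query heads shares one KV head.
k = k.repeat_interleave(n_heads // n_kv, dim=1)  # -> (1, 24, seq, d_head)
v = v.repeat_interleave(n_heads // n_kv, dim=1)
out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
print(out.shape)  # torch.Size([1, 24, 8, 256])

Sharing KV heads shrinks both the K/V projection weights and the KV cache by a factor of n_heads / n_kv = 6 relative to full multi-head attention.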

System Requirements

Estimated VRAM in GB, assuming 10% overhead, for different quantization methods and context sizes.

Quantization                       1K ctx                     195K ctx                   256K ctx
Q4_K_M (0.50 B/W, ~97% of FP16)    14.02 GB (consumer GPU)    26.16 GB (datacenter GPU)  29.96 GB (datacenter GPU)
Q8_0 (1.00 B/W, ~100% of FP16)     27.97 GB (datacenter GPU)  40.12 GB (datacenter GPU)  43.91 GB (datacenter GPU)
F16 (2.00 B/W, reference)          55.89 GB (datacenter GPU)  68.03 GB (datacenter GPU)  71.82 GB (datacenter GPU)

Key: consumer GPU = fits a 24 GB consumer GPU; datacenter GPU = fits an 80 GB datacenter GPU; larger figures require a cluster or multi-GPU setup.
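
The arithmetic behind estimates like these can be sketched as weight bytes plus an FP16 KV cache, scaled by the 10% overhead. The 4-caching-layer figure is the assumption from the About section, and the page's exact bookkeeping may differ slightly, so expect small deviations from the table.

def est_vram_gib(params, bytes_per_weight, ctx,
                 n_kv_layers=4, n_kv_heads=4, head_dim=256):
    weights = params * bytes_per_weight                      # model weights in bytes
    kv = 2 * n_kv_layers * n_kv_heads * head_dim * 2 * ctx   # FP16 keys and values
    return (weights + kv) * 1.10 / 2**30                     # apply 10% overhead

print(f"{est_vram_gib(27.0e9, 2.0, 1024):.2f}")  # ~55.34 vs 55.89 in the table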


Find the right GPU for Qwen 3.6 27B

Use the interactive VRAM Calculator to see exactly how much memory you need at any quantization level, context length, and overhead setting.