Qwen · Dense · Apache 2.0

Qwen 3 32B


Parameters: 32.8B
Max Context: 128K
Architecture: Dense
Released: Apr 29, 2025
Modality: Text

About Qwen 3 32B

Qwen 3 32B is the dense flagship of the Qwen 3 family. At 32.76B parameters with hybrid reasoning (thinking mode toggle), it competes directly with Llama 3.3 70B on coding and reasoning tasks while using half the VRAM. At Q4_K_M it needs ~18 GB, fitting entirely on 24 GB GPUs. The Apache 2.0 license, 128K context, and strong multilingual support make it a top-tier choice for local deployment on consumer hardware.
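The hybrid reasoning toggle mentioned above can be flipped per turn with Qwen 3's documented /think and /no_think soft switches appended to a user message. A minimal sketch (the helper name is ours, not part of any API):

```python
def with_reasoning(prompt: str, think: bool) -> str:
    """Append Qwen 3's soft switch so the model enters or skips its
    thinking phase for this turn. /think and /no_think are the toggle
    strings documented for the Qwen 3 chat template."""
    switch = "/think" if think else "/no_think"
    return f"{prompt} {switch}"

# Build a chat message that asks for a direct answer, no thinking trace.
messages = [
    {"role": "user", "content": with_reasoning("Explain GQA briefly.", think=False)}
]
```

The resulting message list can be passed to any chat-completions endpoint serving the model; serving stacks also commonly expose a template-level flag for the same toggle.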

General Purpose · Code · Reasoning · Multilingual · Commercial

Technical Specifications

Total Parameters: 32.8B
Architecture: Dense
Attention Type: GQA (Grouped Query Attention)
Hidden Dimension: d = 5,120
Transformer Layers: 64
Attention Heads: 64
KV Heads: n_kv = 8
Head Dimension: d_head = 128
Activation Function: SwiGLU
Normalization: RMSNorm
Position Embedding: RoPE
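The attention geometry above fixes the KV-cache footprint, which is why GQA (8 KV heads instead of 64) matters for long contexts. A quick back-of-the-envelope check, assuming an FP16 cache:

```python
layers, n_kv, d_head = 64, 8, 128  # from the spec table above
bytes_fp16 = 2

# K and V each store n_kv * d_head values per layer, per token.
kv_bytes_per_token = 2 * layers * n_kv * d_head * bytes_fp16
print(kv_bytes_per_token)       # 262144 bytes = 256 KiB per token

# At the full 128K context the cache becomes a major memory term.
kv_gib_128k = kv_bytes_per_token * 128 * 1024 / 2**30
print(kv_gib_128k)              # 32.0 GiB
```

With all 64 heads cached (no GQA), the same arithmetic would give 2 MiB per token and 256 GiB at 128K context, an 8x difference.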

System Requirements

Estimated VRAM at 10% overhead for different quantization methods and context sizes.

Quantization | Bytes/Weight | 1K ctx | 128K ctx
Q4_K_M | 0.50 B/W (~97% of FP16) | 17.18 GB (Consumer GPU) | 48.93 GB (Datacenter GPU)
Q8_0 | 1.00 B/W (~100% of FP16) | 34.12 GB (Datacenter GPU) | 65.87 GB (Datacenter GPU)
F16 | 2.00 B/W (reference) | 67.98 GB (Datacenter GPU) | 99.73 GB (Cluster / Multi-GPU)

Consumer GPU: fits a 24 GB consumer GPU
Datacenter GPU: fits an 80 GB datacenter GPU
Cluster / Multi-GPU: requires a multi-GPU cluster
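The estimates follow a simple structure: weight bytes (parameters x bytes per weight) plus KV cache, scaled by the 10% overhead. A rough sketch, assuming an FP16 KV cache with this model's geometry; the table's calculator may use different KV-cache precision and rounding, so expect its numbers to deviate somewhat:

```python
def estimate_vram_gb(params_b: float, bytes_per_weight: float,
                     ctx_tokens: int, overhead: float = 0.10) -> float:
    """Rough VRAM estimate in decimal GB for Qwen 3 32B
    (64 layers, 8 KV heads, head dim 128, FP16 KV cache)."""
    weight_bytes = params_b * 1e9 * bytes_per_weight
    kv_bytes = 2 * 64 * 8 * 128 * 2 * ctx_tokens  # K + V, FP16
    return (weight_bytes + kv_bytes) * (1 + overhead) / 1e9

# Q4_K_M (0.50 bytes/weight) at 1K context:
print(round(estimate_vram_gb(32.76, 0.50, 1024), 2))  # 18.31
```

At long contexts the KV term dominates the growth: doubling the context roughly adds another 0.27 GB per 1K tokens under these assumptions.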


Find the right GPU for Qwen 3 32B

Use the interactive VRAM Calculator to see exactly how much memory you need at any quantization level, context length, and overhead setting.