Qwen · MoE · Apache 2.0

Qwen 3 235B-A22B (MoE)


Parameters: 235.0B
Active: 22.0B
Max Context: 128K
Architecture: MoE
Released:
Modality: Text

About Qwen 3 235B-A22B (MoE)

Qwen 3 235B-A22B (MoE) is a mixture-of-experts (MoE) transformer language model from the Qwen family, with 235B total parameters across 96 layers. All 235B parameters must be loaded into VRAM, but only about 22B are active for each token. It supports up to 131K tokens of context, with a hidden dimension of 8,192 and 8 KV heads for efficient grouped-query attention (GQA). The model is released under the Apache 2.0 license and is the MoE flagship of the Qwen 3 series, targeting server-class deployments.
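For orientation, the sketch below shows one common way to load the model with Hugging Face Transformers. It is not part of the original spec sheet: the repo id Qwen/Qwen3-235B-A22B and the multi-GPU node are assumptions, and the full 235B parameters must fit in aggregate GPU memory even though only 22B are active per token.

```python
# Minimal loading sketch (assumes the Hugging Face repo id "Qwen/Qwen3-235B-A22B"
# and a multi-GPU node with enough aggregate VRAM; see System Requirements below).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3-235B-A22B"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the checkpoint's native precision
    device_map="auto",    # shard layers across all visible GPUs
)

messages = [{"role": "user", "content": "Explain mixture-of-experts routing in one paragraph."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```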

Research · Enterprise

Technical Specifications

Total Parameters: 235.0B
Active Parameters: 22.0B per token
Architecture: Mixture of Experts
Total Experts: 128
Attention Type: GQA
Hidden Dimension: d = 8,192
Transformer Layers: 96
Attention Heads: 64
KV Heads: n_kv = 8
Head Dimension: d_head = 128
Activation Function: SwiGLU
Normalization: RMSNorm
Position Embedding: RoPE
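As a back-of-the-envelope worked example (not an official figure), the per-token KV-cache footprint implied by the GQA figures above is 2 tensors (K and V) × layers × KV heads × head dimension × bytes per element:

```python
# KV-cache size implied by the specs listed above (96 layers, 8 KV heads,
# head_dim 128); a rough sketch, not an official figure.
def kv_cache_bytes(tokens: int,
                   layers: int = 96,
                   kv_heads: int = 8,
                   head_dim: int = 128,
                   bytes_per_elem: int = 2) -> int:  # 2 bytes = FP16/BF16 cache
    # K and V are stored for every layer, per KV head, per token.
    return 2 * layers * kv_heads * head_dim * bytes_per_elem * tokens

print(f"{kv_cache_bytes(1) / 1024:.0f} KiB per token")                        # ~384 KiB
print(f"{kv_cache_bytes(131_072) / 1024**3:.1f} GiB at full 131K context")    # ~48 GiB
```

That ~48 GiB at full context is roughly consistent with the gap between the 1K-context and 128K-context columns in the System Requirements table below.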

System Requirements

Estimated VRAM at 10% overhead for different quantization methods and context sizes.

Quantization   Bytes/Weight   vs. FP16     1K ctx      128K ctx
Q4_K_M         0.50 B/W       ~97%         121.8 GB    169.5 GB
Q8_0           1.00 B/W       ~100%        243.3 GB    290.9 GB
F16            2.00 B/W       Reference    486.2 GB    533.9 GB

All three quantizations, at either context length, fall in the "Requires cluster / multi-GPU" tier for this model.

Legend: Fits 24 GB consumer GPU · Fits 80 GB datacenter GPU · Requires cluster / multi-GPU
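The table values can be approximated with a simple formula: weight bytes (parameters × bytes per weight) plus KV cache for the chosen context, scaled by the 10% overhead. The sketch below is an approximation built from the figures above; the exact rounding and overhead treatment of the source calculator are not specified, so it will not reproduce the table to the decimal.

```python
# Rough VRAM estimate matching the structure of the table above; the exact
# rounding/overhead rules of the source calculator are assumptions.
GIB = 1024 ** 3

def estimate_vram_gib(bytes_per_weight: float,
                      context_tokens: int,
                      total_params: float = 235e9,
                      overhead: float = 0.10) -> float:
    weights = total_params * bytes_per_weight
    # KV cache at FP16, using the listed 96 layers, 8 KV heads, head_dim 128.
    kv_cache = 2 * 96 * 8 * 128 * 2 * context_tokens
    return (weights + kv_cache) * (1 + overhead) / GIB

for name, bpw in [("Q4_K_M", 0.5), ("Q8_0", 1.0), ("F16", 2.0)]:
    print(f"{name}: {estimate_vram_gib(bpw, 1024):.1f} GiB @ 1K ctx, "
          f"{estimate_vram_gib(bpw, 131_072):.1f} GiB @ 128K ctx")
```

Even the smallest estimate is well beyond a single 80 GB accelerator, which is why every entry carries the cluster / multi-GPU tag.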


Find the right GPU for Qwen 3 235B-A22B (MoE)

Use the interactive VRAM Calculator to see exactly how much memory you need at any quantization level, context length, and overhead setting.