Qwen 3 235B-A22B (MoE)
- Parameters: 235.0B
- Active: 22.0B
- Max Context: 128K
- Architecture: MoE
- Released: —
- Modality: Text
About Qwen 3 235B-A22B (MoE)
Qwen 3 235B-A22B (MoE) is a mixture-of-experts (MoE) transformer language model from the Qwen family. All 235B parameters must be resident in VRAM, but only 22B are active per token. The model has 96 layers, a hidden dimension of 8192, and 8 KV heads for efficient grouped-query attention (GQA), and supports up to 128K (131K) tokens of context. It is released under the Apache 2.0 license and, as the MoE flagship of the family, is firmly server-class hardware.
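The GQA design matters for memory because only the 8 KV heads are cached per layer, not the full set of query heads. Below is a minimal sketch of the generic per-token KV-cache cost; the head dimension and fp16 cache precision are assumptions (neither is stated on this page), so treat the output as an order-of-magnitude estimate rather than the calculator's exact figure.

```python
# Rough per-token KV-cache cost for Qwen 3 235B-A22B (MoE).
N_LAYERS = 96       # from the spec above
N_KV_HEADS = 8      # GQA: only KV heads are cached, not query heads
HEAD_DIM = 128      # assumption; not stated on this page
BYTES_PER_ELEM = 2  # assumes an fp16/bf16 cache

# K and V are each cached once per layer per KV head.
kv_bytes_per_token = 2 * N_LAYERS * N_KV_HEADS * HEAD_DIM * BYTES_PER_ELEM
print(f"{kv_bytes_per_token / 1024:.0f} KiB per token")
print(f"{kv_bytes_per_token * 131_072 / 2**30:.1f} GiB at full 128K context")
```

Note that the table below implies a smaller 1K-to-128K growth (~48 GB), so the page's calculator evidently assumes a leaner cache (for example a smaller head size or a quantized KV cache) than this generic fp16 estimate.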
Technical Specifications
System Requirements
Estimated VRAM, in GB, assuming 10% overhead, for different quantization methods and context sizes.
| Quantization | 1K ctx | 128K ctx |
|---|---|---|
| Q4_K_M (0.50 B/W, ~97% of FP16 quality) | 121.8 GB | 169.5 GB |
| Q8_0 (1.00 B/W, ~100% of FP16 quality) | 243.3 GB | 290.9 GB |
| F16 (2.00 B/W, reference) | 486.2 GB | 533.9 GB |

All configurations fall in the Cluster / Multi-GPU tier.
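For intuition, the 1K-ctx column is approximately total parameters × bytes per weight × 1.10. A minimal sketch of that arithmetic follows; the exact calculator also folds in the KV cache and runtime buffers, which is why its figures sit a few GB above these.

```python
# Weights-only VRAM estimate: parameters x bytes-per-weight x overhead.
PARAMS = 235e9   # total parameters; all are resident in VRAM,
                 # even though only 22B are active per token
OVERHEAD = 1.10  # the 10% overhead used by this page

for name, bytes_per_weight in [("Q4_K_M", 0.50), ("Q8_0", 1.00), ("F16", 2.00)]:
    gib = PARAMS * bytes_per_weight * OVERHEAD / 2**30
    print(f"{name}: ~{gib:.1f} GiB for weights + overhead")
# Q4_K_M: ~120.4 GiB, Q8_0: ~240.8 GiB, F16: ~481.5 GiB
```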
Find the right GPU for Qwen 3 235B-A22B (MoE)
Use the interactive VRAM Calculator to see exactly how much memory you need at any quantization level, context length, and overhead setting.