Qwen 3.6 35B-A3B (MoE)
A mixture-of-experts (MoE) transformer language model from the Qwen family with 35B total parameters (3B active per token) and up to 256K tokens of context.
- **Parameters:** 35.0B
- **Active:** 3.0B
- **Max Context:** 256K
- **Architecture:** MoE
- **Released:** —
- **Modality:** Text
About Qwen 3.6 35B-A3B (MoE)
Qwen 3.6 35B-A3B (MoE) is a mixture-of-experts (MoE) transformer language model from the Qwen family, containing 35B parameters across 10 layers. All 35B parameters are loaded into VRAM, but only 3B are active per token. It supports up to 262K tokens of context, with a hidden dimension of 2048 and 2 KV heads for efficient grouped-query attention (GQA). The model is released under the Apache 2.0 license. Its MoE layers use 256 experts, with 8 routed plus 1 shared expert active per token, in a hybrid DeltaNet + gated attention (GA) architecture. The native 262K context can be extended to roughly 1M tokens with YaRN, and the model scores 73.4 on SWE-bench.
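To make the routing arithmetic concrete, here is a minimal sketch of a top-8 MoE layer with an always-on shared expert, using the figures above (256 experts, hidden dimension 2048). The class name, expert FFN width, and per-token dispatch loop are illustrative assumptions, not the model's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoELayer(nn.Module):
    """Illustrative top-k MoE layer: 256 experts, 8 routed + 1 shared active per token."""

    def __init__(self, hidden_dim=2048, num_experts=256, top_k=8, ffn_dim=768):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(hidden_dim, num_experts, bias=False)
        # ffn_dim is an assumed expert width, chosen only for illustration.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(hidden_dim, ffn_dim), nn.SiLU(), nn.Linear(ffn_dim, hidden_dim))
            for _ in range(num_experts)
        )
        # One shared expert that every token passes through, in addition to the routed ones.
        self.shared_expert = nn.Sequential(
            nn.Linear(hidden_dim, ffn_dim), nn.SiLU(), nn.Linear(ffn_dim, hidden_dim)
        )

    def forward(self, x):                      # x: (num_tokens, hidden_dim)
        scores = self.router(x)                # (num_tokens, num_experts)
        weights, indices = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)   # normalize over the 8 selected experts
        out = self.shared_expert(x)            # shared expert is always active
        for t in range(x.size(0)):             # naive per-token dispatch, for clarity only
            for w, idx in zip(weights[t], indices[t]):
                out[t] = out[t] + w * self.experts[int(idx)](x[t])
        return out

# Usage: 4 tokens in, 4 tokens out, only 8 routed experts + 1 shared expert touched per token.
layer = SparseMoELayer()
y = layer(torch.randn(4, 2048))
```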
Technical Specifications
System Requirements
Estimated VRAM in GB, assuming 10% overhead, for different quantization methods and context sizes.
| Quantization | Bytes/Weight | Quality | 1K ctx (GB) | 195K ctx (GB) | 256K ctx (GB) |
|---|---|---|---|---|---|
| Q4_K_M | 0.50 | ~97% of FP16 | 18.11 (Consumer GPU) | 21.91 (Consumer GPU) | 23.09 (Consumer GPU) |
| Q8_0 | 1.00 | ~100% of FP16 | 36.20 (Datacenter GPU) | 40.00 (Datacenter GPU) | 41.18 (Datacenter GPU) |
| F16 | 2.00 | Reference | 72.38 (Datacenter GPU) | 76.18 (Datacenter GPU) | 77.36 (Datacenter GPU) |
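As a back-of-the-envelope view of how such estimates are built, the sketch below adds weight memory (total parameters × bytes per weight) and an FP16 KV cache for the chosen context, then applies the 10% overhead. The head dimension and KV-cache layout are assumptions, so the results approximate rather than reproduce the table.

```python
def estimate_vram_gb(
    params_b=35.0,          # total parameters in billions (all MoE weights loaded)
    bytes_per_weight=0.5,   # 0.50 = Q4_K_M, 1.00 = Q8_0, 2.00 = F16
    context_tokens=1024,
    num_layers=10,          # layer count from the spec above
    num_kv_heads=2,         # GQA KV heads from the spec above
    head_dim=128,           # assumed; not stated on this page
    kv_bytes=2,             # FP16 KV cache
    overhead=0.10,          # the 10% overhead used in the table
):
    """Rough VRAM estimate in GB: weights + KV cache, inflated by a flat overhead factor."""
    weight_bytes = params_b * 1e9 * bytes_per_weight
    # K and V caches: 2 tensors x kv_heads x head_dim bytes per token, per layer.
    kv_cache_bytes = 2 * num_kv_heads * head_dim * kv_bytes * num_layers * context_tokens
    return (weight_bytes + kv_cache_bytes) * (1 + overhead) / 1e9

# e.g. F16 weights at the full 262K context:
# estimate_vram_gb(bytes_per_weight=2.0, context_tokens=262_144)
```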
Find the right GPU for Qwen 3.6 35B-A3B (MoE)
Use the interactive VRAM Calculator to see exactly how much memory you need at any quantization level, context length, and overhead setting.