
Qwen 3.5 397B-A17B (MoE)

Qwen 3.5 397B-A17B (MoE) is a mixture-of-experts (MoE) transformer language model from the Qwen family, with 397B total parameters across 15 layers, all loaded into VRAM, and 17B active per token. It supports up to 262K tokens of context.

Parameters: 397.0B
Active: 17.0B
Max Context: 256K
Architecture: MoE
Released:
Modality: Text

About Qwen 3.5 397B-A17B (MoE)

Qwen 3.5 397B-A17B (MoE) is a mixture-of-experts (MoE) transformer language model from the Qwen family, with 397B total parameters across 15 layers, all loaded into VRAM, and 17B active per token. It supports up to 262K tokens of context, with a hidden dimension of 4096 and 2 KV heads for efficient grouped-query attention (GQA). Released under the Apache 2.0 license, it is the MoE flagship of the family, combining 512 experts with a DeltaNet+MoE hybrid design, and targets server-class deployments.

Use cases: Research, Enterprise
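The gap between 397B total and 17B active parameters comes from sparse expert routing: for every token, a router scores the experts in each MoE layer and only a small subset of them actually runs. Below is a minimal sketch of top-k routing in NumPy; the expert count matches the 512 stated above, but top_k, the toy dimensions, and the expert functions are placeholders, since the page does not document the per-token routing configuration.

```python
import numpy as np

def moe_forward(x, router_w, experts, top_k=8):
    """Route one token's hidden state through a sparse mixture of experts.

    x        : (d,) hidden state, d = 4096 for this model
    router_w : (n_experts, d) router projection, n_experts = 512 here
    experts  : list of callables, one per expert FFN
    top_k    : experts activated per token (hypothetical value; the page
               only states 17B active parameters, not the routing top-k)
    """
    logits = router_w @ x                         # (n_experts,) router scores
    top = np.argsort(logits)[-top_k:]             # indices of the k best experts
    weights = np.exp(logits[top] - logits[top].max())
    weights /= weights.sum()                      # softmax over the selected experts
    # Weighted sum of the selected experts' outputs; all other experts stay idle,
    # which is why only ~17B of the 397B parameters are touched per token.
    return sum(w * experts[i](x) for w, i in zip(weights, top))

# Toy usage with random weights and identity "experts"
d, n_experts = 4096, 512
rng = np.random.default_rng(0)
x = rng.standard_normal(d)
router_w = rng.standard_normal((n_experts, d)) / np.sqrt(d)
experts = [lambda h: h for _ in range(n_experts)]
print(moe_forward(x, router_w, experts).shape)    # (4096,)
```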

Technical Specifications

Total Parameters: 397.0B
Active Parameters: 17.0B per token
Architecture: Mixture of Experts
Total Experts: 512
Attention Type: GQA
Hidden Dimension: d = 4,096
Transformer Layers: 15
Attention Heads: 32
KV Heads: n_kv = 2
Head Dimension: d_head = 256
Activation Function: SwiGLU
Normalization: RMSNorm
Position Embedding: RoPE
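The attention configuration above (2 KV heads, 256-dim heads, 15 layers) is what keeps long-context memory manageable: the KV cache scales with the number of KV heads rather than the 32 query heads. A quick back-of-the-envelope check, assuming an FP16 KV cache:

```python
# KV-cache footprint for the GQA configuration listed above (FP16 cache assumed)
n_layers, n_kv_heads, head_dim = 15, 2, 256
bytes_per_elem = 2                                      # FP16
kv_per_token = 2 * n_kv_heads * head_dim * n_layers * bytes_per_elem  # K and V
print(kv_per_token)                                     # 30720 bytes, ~30 KiB per token

ctx = 262_144                                           # full 256K context
print(kv_per_token * ctx / 2**30)                       # 7.5 -> ~7.5 GB of cache
```

That ~7.5 GB lines up with the spread between the 1K-context and 256K-context columns in the VRAM table below.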

System Requirements

Estimated VRAM at 10% overhead for different quantization methods and context sizes.

Quantization                     | 1K ctx   | 195K ctx | 256K ctx
Q4_K_M (0.50 B/W, ~97% of FP16)  | 205.2 GB | 210.9 GB | 212.7 GB
Q8_0 (1.00 B/W, ~100% of FP16)   | 410.4 GB | 416.1 GB | 417.9 GB
F16 (2.00 B/W, reference)        | 820.8 GB | 826.5 GB | 828.3 GB

Every configuration above requires a cluster / multi-GPU setup; none fits a single 24 GB consumer GPU or an 80 GB datacenter GPU.
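The table values follow the usual weights-plus-KV-cache arithmetic. The sketch below reproduces them to within roughly 1%, assuming the 10% overhead is applied to quantized weights plus an FP16 KV cache; the page's exact accounting (runtime buffers, rounding) is not documented, so treat this as an estimate rather than the calculator's actual formula.

```python
def estimate_vram_gb(total_params_b, bytes_per_weight, ctx_tokens,
                     n_layers=15, n_kv_heads=2, head_dim=256,
                     kv_bytes=2, overhead=0.10):
    """Rough VRAM estimate: quantized weights + FP16 KV cache, plus 10% overhead."""
    weight_bytes = total_params_b * 1e9 * bytes_per_weight
    kv_cache_bytes = 2 * n_kv_heads * head_dim * n_layers * kv_bytes * ctx_tokens
    # bytes -> GB using 2**30 bytes per GB (binary GB)
    return (weight_bytes + kv_cache_bytes) * (1 + overhead) / 2**30

for name, bpw in [("Q4_K_M", 0.50), ("Q8_0", 1.00), ("F16", 2.00)]:
    print(name, round(estimate_vram_gb(397.0, bpw, 262_144), 1))
# Q4_K_M ~211.6, Q8_0 ~415.0, F16 ~821.7 -- close to the 256K column above
```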


Find the right GPU for Qwen 3.5 397B-A17B (MoE)

Use the interactive VRAM Calculator to see exactly how much memory you need at any quantization level, context length, and overhead setting.