DeepSeek V4-Flash (MoE)
| Parameters | Active | Max Context | Architecture | Released | Modality |
|---|---|---|---|---|---|
| 284.0B | 13.0B | 1.0M | MoE | April 2026 | Text |
About DeepSeek V4-Flash (MoE)
DeepSeek V4-Flash (MoE) is a mixture-of-experts (MoE) transformer language model from the DeepSeek family, released in April 2026. It contains 284B total parameters across 48 layers, all of which are loaded into VRAM, with 13B parameters active per token. It supports up to 1.0M tokens of context, with a hidden dimension of 6144 and 8 KV heads for efficient grouped-query attention (GQA). Positioned as the economical variant of the V4 line, it targets high-memory, server-class deployments.
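Because the model uses GQA with only 8 KV heads, the per-token KV-cache footprint stays modest relative to the 6144 hidden dimension. Below is a minimal sizing sketch; the per-head dimension is not listed on this page, so a head_dim of 128 and an FP16 KV cache are assumptions, and the result is only a rough approximation.

```python
# Rough KV-cache sizing for DeepSeek V4-Flash under GQA.
# Assumptions (not stated on this page): head_dim = 128, FP16 KV cache.
N_LAYERS = 48
N_KV_HEADS = 8
HEAD_DIM = 128   # assumed; 6144 hidden dim / 128 would imply 48 query heads
KV_BYTES = 2     # FP16 = 2 bytes per element (assumed KV precision)

def kv_cache_gb(context_tokens: int) -> float:
    """Approximate KV-cache size in GB for a given context length."""
    # 2x for keys and values, per layer, per KV head, per head dimension.
    per_token_bytes = 2 * N_LAYERS * N_KV_HEADS * HEAD_DIM * KV_BYTES
    return context_tokens * per_token_bytes / 1e9

if __name__ == "__main__":
    for ctx in (1_000, 195_000, 1_000_000):
        print(f"{ctx:>9,} tokens -> ~{kv_cache_gb(ctx):.1f} GB KV cache")
```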
Technical Specifications
System Requirements
Estimated VRAM (GB) at 10% overhead for different quantization methods and context sizes; a rough formula sketch follows the table.
| Quantization | Bytes/Weight | Quality vs FP16 | 1K ctx | 195K ctx | 1.0M ctx | 1.0M ctx |
|---|---|---|---|---|---|---|
| Q4_K_M | 0.50 B/W | ~97% | 147.0 GB | 183.4 GB | 329.9 GB | 338.8 GB |
| Q8_0 | 1.00 B/W | ~100% | 293.8 GB | 330.2 GB | 476.7 GB | 485.6 GB |
| F16 | 2.00 B/W | Reference | 587.4 GB | 623.8 GB | 770.3 GB | 779.2 GB |

All configurations fall into the Cluster / Multi-GPU hardware class.
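Figures in this ballpark can be approximated as quantized weight size plus KV cache, scaled by the overhead percentage. The bytes-per-weight values below come from the table above, but the KV-cache parameters (head_dim 128, FP16 KV) and the exact overhead accounting are assumptions, so this sketch will not reproduce the calculator's numbers exactly.

```python
# Back-of-the-envelope VRAM estimate: quantized weights + KV cache + overhead.
# The calculator's exact accounting is not published; this is an approximation.
TOTAL_PARAMS = 284e9   # all parameters are resident in VRAM, even though only 13B are active
N_LAYERS, N_KV_HEADS = 48, 8
HEAD_DIM, KV_BYTES = 128, 2   # assumed head dimension and FP16 KV cache

BYTES_PER_WEIGHT = {"Q4_K_M": 0.50, "Q8_0": 1.00, "F16": 2.00}  # from the table above

def estimate_vram_gb(quant: str, context_tokens: int, overhead: float = 0.10) -> float:
    """Estimate total VRAM (GB) for a given quantization and context length."""
    weights_gb = TOTAL_PARAMS * BYTES_PER_WEIGHT[quant] / 1e9
    kv_gb = context_tokens * 2 * N_LAYERS * N_KV_HEADS * HEAD_DIM * KV_BYTES / 1e9
    return (weights_gb + kv_gb) * (1 + overhead)

if __name__ == "__main__":
    for quant in ("Q4_K_M", "Q8_0", "F16"):
        for ctx in (1_000, 195_000, 1_000_000):
            print(f"{quant:>7} @ {ctx:>9,} ctx: ~{estimate_vram_gb(quant, ctx):.0f} GB")
```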
Other DeepSeek Models
| Model | Params | Layers | Context |
|---|---|---|---|
| DeepSeek R1 (MoE) | 671.0B | 61 | 64K |
| DeepSeek V3 (MoE) | 671.0B | 61 | 64K |
| DeepSeek V3 0324 (MoE) | 685.0B | 61 | 64K |
| DeepSeek V4-Pro (MoE) | 1.6T | 80 | 1.0M |
| DeepSeek R1 Distill Qwen 1.5B | 1.5B | 28 | 32K |
| DeepSeek R1 Distill Qwen 7B | 7.6B | 28 | 32K |
Find the right GPU for DeepSeek V4-Flash (MoE)
Use the interactive VRAM Calculator to see exactly how much memory you need at any quantization level, context length, and overhead setting.