DeepSeek V3 (MoE)
| Parameters | Active | Max Context | Architecture | Released | Modality |
|---|---|---|---|---|---|
| 671.0B | 37.0B | 64K | MoE | Dec 26, 2024 | Text |
About DeepSeek V3 (MoE)
DeepSeek V3 is the non-reasoning counterpart to R1 and shares its architecture: a 671B-parameter MoE with 37B active per token, using Multi-head Latent Attention (MLA) to compress the KV cache. It delivers frontier-class general performance on par with GPT-4o and Claude 3.5 Sonnet, and its MIT license made it the most capable truly open-weight model available at release. Like R1, the full model requires server or cluster hardware (~370 GB at Q4_K_M), but its architectural innovations (MLA, auxiliary-loss-free load balancing, multi-token prediction) have influenced the entire field.
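The "compressed KV cache" point is easy to put numbers on. The sketch below uses the MLA dimensions published in the DeepSeek papers (a 512-dim compressed latent plus a 64-dim decoupled RoPE key per token per layer) and the 61-layer depth listed further down this page; treat it as a rough illustration, since real serving stacks add their own runtime overhead on top.

```python
# Rough per-token KV-cache arithmetic for DeepSeek V3's MLA.
# Dimensions are from the DeepSeek papers; runtime overhead is not included.

LAYERS = 61        # transformer blocks
D_LATENT = 512     # compressed KV latent per token per layer
D_ROPE = 64        # decoupled RoPE key per token per layer
CACHE_BYTES = 2    # FP16/BF16 cache

def mla_kv_bytes_per_token() -> int:
    return LAYERS * (D_LATENT + D_ROPE) * CACHE_BYTES

def kv_cache_gib(context_tokens: int) -> float:
    return mla_kv_bytes_per_token() * context_tokens / 2**30

if __name__ == "__main__":
    print(f"per token: {mla_kv_bytes_per_token() / 1024:.1f} KiB")   # ~68.6 KiB
    print(f"64K context: {kv_cache_gib(64 * 1024):.2f} GiB")          # ~4.3 GiB
```

Caching only the latent and the small RoPE key, rather than full per-head keys and values, is what keeps long contexts tractable for a model of this depth.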
Technical Specifications
System Requirements
Estimated VRAM at 10% overhead for different quantization methods and context sizes.
| Quantization | Bytes/weight | Quality | 1K ctx | 64K ctx |
|---|---|---|---|---|
| Q4_K_M | 0.50 | ~97% of FP16 | 347.1 GB (Cluster / Multi-GPU) | 362.1 GB (Cluster / Multi-GPU) |
| Q8_0 | 1.00 | ~100% of FP16 | 693.9 GB (Cluster / Multi-GPU) | 708.9 GB (Cluster / Multi-GPU) |
| F16 | 2.00 | Reference | 1387.6 GB (Cluster / Multi-GPU) | 1402.6 GB (Cluster / Multi-GPU) |
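The rows follow the usual back-of-envelope pattern: total parameter count times bytes per weight, scaled by the overhead factor, with the KV cache accounting for the gap between the context columns. The sketch below is not the calculator's exact formula (its constants are approximations), so it lands near, but not exactly on, the figures in the table.

```python
# Approximate weight footprint per quantization level, plus 10% overhead.
# Illustrative only; the table above comes from the site's own calculator
# and differs slightly from this estimate.

TOTAL_PARAMS = 671e9  # total MoE parameters (all experts must be resident)

BYTES_PER_WEIGHT = {"Q4_K_M": 0.50, "Q8_0": 1.00, "F16": 2.00}

def weight_footprint_gib(quant: str, overhead: float = 0.10) -> float:
    return TOTAL_PARAMS * BYTES_PER_WEIGHT[quant] * (1 + overhead) / 2**30

for quant in BYTES_PER_WEIGHT:
    print(f"{quant}: ~{weight_footprint_gib(quant):.0f} GiB before KV cache")
```

Although only 37B parameters are active per token, all 671B must be loaded, which is why MoE saves compute per token but not memory.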
Other DeepSeek Models
| Model | Params | Layers | Context |
|---|---|---|---|
| DeepSeek R1 (MoE) | 671.0B | 61 | 64K |
| DeepSeek V3 0324 (MoE) | 685.0B | 61 | 64K |
| DeepSeek V4-Pro (MoE) | 1.6T | 80 | 1.0M |
| DeepSeek V4-Flash (MoE) | 284.0B | 48 | 1.0M |
| DeepSeek R1 Distill Qwen 1.5B | 1.5B | 28 | 32K |
| DeepSeek R1 Distill Qwen 7B | 7.6B | 28 | 32K |
Find the right GPU for DeepSeek V3 (MoE)
Use the interactive VRAM Calculator to see exactly how much memory you need at any quantization level, context length, and overhead setting.
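If you want the same arithmetic offline, the two sketches above combine into a single function with the calculator's three knobs (quantization, context length, overhead). As before, the constants are illustrative approximations rather than the calculator's exact internals.

```python
# Self-contained calculator-style estimate: weights + MLA KV cache, scaled by
# an overhead factor. Approximate constants; expect small differences from the
# interactive calculator's output.

TOTAL_PARAMS = 671e9
BYTES_PER_WEIGHT = {"Q4_K_M": 0.50, "Q8_0": 1.00, "F16": 2.00}
LAYERS, D_LATENT, D_ROPE, CACHE_BYTES = 61, 512, 64, 2  # MLA cache dims, FP16

def vram_gib(quant: str, context_tokens: int, overhead: float = 0.10) -> float:
    weights = TOTAL_PARAMS * BYTES_PER_WEIGHT[quant]
    kv_cache = LAYERS * (D_LATENT + D_ROPE) * CACHE_BYTES * context_tokens
    return (weights + kv_cache) * (1 + overhead) / 2**30

print(f'Q4_K_M @ 64K context: ~{vram_gib("Q4_K_M", 64 * 1024):.0f} GiB')
```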