DeepSeek · Dense · MIT

DeepSeek R1 Distill Qwen 14B


Parameters: 14.7B
Max Context: 32K
Architecture: Dense
Released:
Modality: Text

About DeepSeek R1 Distill Qwen 14B

DeepSeek R1 Distill Qwen 14B is a dense transformer language model from the DeepSeek family, containing 14.7B parameters across 48 layers. It supports a context of up to 32,768 tokens (32K), with a hidden dimension of 5,120 and 8 KV heads for efficient grouped-query attention (GQA). Its reasoning ability is distilled from DeepSeek R1 into the Qwen 2.5 14B base model.


Technical Specifications

Total Parameters: 14.7B
Architecture: Dense
Attention Type: GQA (Grouped-Query Attention)
Hidden Dimension: d = 5,120
Transformer Layers: 48
Attention Heads: 40
KV Heads: n_kv = 8
Head Dimension: d_head = 128
Activation Function: SwiGLU
Normalization: RMSNorm
Position Embedding: RoPE
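As a concrete illustration of how these dimensions fit together, here is a minimal sketch of one grouped-query attention layer in PyTorch. It is illustrative only, not DeepSeek's actual implementation: RoPE, RMSNorm, and the rest of a real transformer block are omitted, and the random weights are placeholders.

```python
import torch
import torch.nn.functional as F

# Dimensions from the spec table above.
D_MODEL, N_HEADS, N_KV, D_HEAD = 5120, 40, 8, 128
GROUP = N_HEADS // N_KV  # 5 query heads share each K/V head

def gqa(x, w_q, w_k, w_v, w_o):
    """One grouped-query attention layer (RoPE and norms omitted)."""
    B, T, _ = x.shape
    q = (x @ w_q).view(B, T, N_HEADS, D_HEAD).transpose(1, 2)  # (B, 40, T, 128)
    k = (x @ w_k).view(B, T, N_KV, D_HEAD).transpose(1, 2)     # (B,  8, T, 128)
    v = (x @ w_v).view(B, T, N_KV, D_HEAD).transpose(1, 2)
    # Each of the 8 K/V heads serves a group of 5 query heads.
    k = k.repeat_interleave(GROUP, dim=1)                      # (B, 40, T, 128)
    v = v.repeat_interleave(GROUP, dim=1)
    out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
    return out.transpose(1, 2).reshape(B, T, N_HEADS * D_HEAD) @ w_o

# Placeholder weights, just to show the shapes line up.
x = torch.randn(1, 16, D_MODEL)
w_q = torch.randn(D_MODEL, N_HEADS * D_HEAD) * 0.02
w_k = torch.randn(D_MODEL, N_KV * D_HEAD) * 0.02
w_v = torch.randn(D_MODEL, N_KV * D_HEAD) * 0.02
w_o = torch.randn(N_HEADS * D_HEAD, D_MODEL) * 0.02
print(gqa(x, w_q, w_k, w_v, w_o).shape)  # torch.Size([1, 16, 5120])
```

Because only 8 of the 40 heads carry K/V state, the KV cache is 5x smaller than with standard multi-head attention, which is what keeps the long-context memory figures below manageable.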

System Requirements

Estimated VRAM in GB, assuming 10% overhead, across quantization methods and context sizes.

Quantization   Bytes/Weight   Quality         1K ctx     32K ctx
Q4_K_M         0.50           ~97% of FP16    7.79 GB    13.60 GB
Q8_0           1.00           ~100% of FP16   15.38 GB   21.20 GB
F16            2.00           Reference       30.58 GB   36.39 GB

Q4_K_M and Q8_0 fit a 24 GB consumer GPU at both context sizes; F16 requires an 80 GB datacenter GPU. Anything larger requires a cluster / multi-GPU setup.
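These figures can be approximated from first principles: the weights at the quantization's bytes-per-weight, plus an FP16 KV cache that grows linearly with context. The sketch below is a rough estimator under those assumptions (10% overhead applied to the weights); it approximates the table above but is not the site's exact calculator formula.

```python
# Rough VRAM estimator for DeepSeek R1 Distill Qwen 14B.
# Assumptions (not the site's exact formula): weights at bytes_per_weight
# with 10% overhead, plus an FP16 KV cache; results reported in GiB.

PARAMS = 14.7e9                        # total parameters
LAYERS, KV_HEADS, HEAD_DIM = 48, 8, 128

def kv_cache_bytes(ctx):
    # K and V vectors, 2 bytes each (FP16), per KV head, per layer, per token.
    return ctx * LAYERS * KV_HEADS * HEAD_DIM * 2 * 2

def vram_gib(bytes_per_weight, ctx, overhead=0.10):
    weights = PARAMS * bytes_per_weight * (1 + overhead)
    return (weights + kv_cache_bytes(ctx)) / 2**30

for name, bpw in [("Q4_K_M", 0.50), ("Q8_0", 1.00), ("F16", 2.00)]:
    print(f"{name}: {vram_gib(bpw, 1024):.2f} GiB @ 1K, "
          f"{vram_gib(bpw, 32768):.2f} GiB @ 32K")
```

The KV cache alone costs 48 layers × 8 KV heads × 128 dims × 2 (K and V) × 2 bytes ≈ 192 KiB per token, which accounts for the roughly 6 GB gap between the 1K and 32K columns in every row.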


Find the right GPU for DeepSeek R1 Distill Qwen 14B

Use the interactive VRAM Calculator to see exactly how much memory you need at any quantization level, context length, and overhead setting.