DeepSeek · Dense · MIT

DeepSeek R1 Distill Qwen 32B


Parameters: 32.5B
Max Context: 32K
Architecture: Dense
Released: Jan 20, 2025
Modality: Text

About DeepSeek R1 Distill Qwen 32B

DeepSeek R1 Distill Qwen 32B brings chain-of-thought reasoning to consumer hardware. It is Qwen 2.5 32B fine-tuned on reasoning traces from the full DeepSeek R1 model. At 32.5B parameters and ~18 GB VRAM at Q4_K_M, it fits on 24 GB GPUs with room for context. It delivers strong math and logic performance for its size, competitive with 70B dense models on reasoning benchmarks, and the MIT license makes it safe for commercial use. This makes it one of the strongest local reasoning options for 24 GB GPU owners.
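Like the other R1 distills, this model emits its chain of thought inside `<think>...</think>` tags before the final answer. A minimal sketch for separating the two; the sample completion below is invented for illustration:

```python
import re

def split_reasoning(completion: str) -> tuple[str, str]:
    """Split an R1-style completion into (reasoning, answer).

    Assumes the model wraps its chain of thought in <think>...</think>,
    as the DeepSeek R1 distills do by default.
    """
    match = re.search(r"<think>(.*?)</think>", completion, re.DOTALL)
    if match is None:
        return "", completion.strip()  # no visible reasoning trace
    reasoning = match.group(1).strip()
    answer = completion[match.end():].strip()
    return reasoning, answer

# Invented sample completion for illustration:
sample = "<think>2 + 2: add the units digits, giving 4.</think>\n2 + 2 = 4"
thought, answer = split_reasoning(sample)
print(answer)  # → 2 + 2 = 4
```

Stripping the think block before showing output (or before feeding the turn back into the context window) is the usual pattern, since the reasoning trace can be long.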

Reasoning · Math · STEM · Code · Commercial

Technical Specifications

Total Parameters: 32.5B
Architecture: Dense
Attention Type: GQA (Grouped Query Attention)
Hidden Dimension: d = 5,120
Transformer Layers: 64
Attention Heads: 40
KV Heads: n_kv = 8
Head Dimension: d_head = 128
Activation Function: SwiGLU
Normalization: RMSNorm
Position Embedding: RoPE
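The GQA figures above determine the KV-cache cost directly: each token stores keys and values for n_kv = 8 heads of dimension 128 across all 64 layers. A quick back-of-the-envelope check, assuming an FP16 cache (some runtimes quantize it further):

```python
# KV cache per token = 2 (K and V) * layers * kv_heads * head_dim * bytes_per_value
layers, kv_heads, head_dim = 64, 8, 128
bytes_fp16 = 2

kv_bytes_per_token = 2 * layers * kv_heads * head_dim * bytes_fp16
print(kv_bytes_per_token)                   # 262144 bytes = 256 KiB per token
print(kv_bytes_per_token * 32768 / 2**30)   # full 32K context: 8.0 GiB
```

With full multi-head attention (40 KV heads instead of 8) the cache would be 5x larger, which is why GQA matters for long contexts on a single GPU.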

System Requirements

Estimated VRAM at 10% overhead for different quantization methods and context sizes.

Quantization   Bytes/weight (B/W)   Quality         1K ctx     32K ctx
Q4_K_M         0.50                 ~97% of FP16    17.05 GB   24.80 GB
Q8_0           1.00                 ~100% of FP16   33.85 GB   41.60 GB
F16            2.00                 Reference       67.44 GB   75.19 GB

Q4_K_M at 1K context fits a 24 GB consumer GPU; all other configurations shown require an 80 GB datacenter GPU. Larger footprints would need multi-GPU or a cluster.
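The table values are consistent with a simple model: quantized weight bytes with the 10% overhead applied, plus an FP16 KV cache sized from the GQA specs above. A sketch that approximately reproduces the table, assuming ~32.8B effective parameters; the calculator's exact internals may differ:

```python
def estimate_vram_gib(params, bytes_per_weight, ctx_tokens,
                      layers=64, kv_heads=8, head_dim=128, overhead=0.10):
    """Rough VRAM estimate in GiB: quantized weights (plus overhead)
    and an FP16 KV cache. An approximation, not the calculator's code."""
    weight_bytes = params * bytes_per_weight
    kv_bytes = 2 * layers * kv_heads * head_dim * 2 * ctx_tokens  # FP16 K+V
    return (weight_bytes * (1 + overhead) + kv_bytes) / 2**30

PARAMS = 32.8e9  # assumed effective parameter count

print(round(estimate_vram_gib(PARAMS, 0.50, 1024), 2))   # ≈ 17.05 (Q4_K_M, 1K)
print(round(estimate_vram_gib(PARAMS, 0.50, 32768), 2))  # ≈ 24.8  (Q4_K_M, 32K)
```

The takeaway from the formula: weight quantization sets the floor, and context length adds roughly 0.25 GiB per 1K tokens on top of it.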

Find the right GPU for DeepSeek R1 Distill Qwen 32B

Use the interactive VRAM Calculator to see exactly how much memory you need at any quantization level, context length, and overhead setting.