
Phi-4 14B


Parameters: 14.7B
Max Context: 16K
Architecture: Dense
Released: Dec 12, 2024
Modality: Text

About Phi-4 14B

Phi-4 14B is Microsoft's math and reasoning specialist. Despite its modest 14.7B parameter count, it achieves performance competitive with 70B models on mathematical reasoning, logic puzzles, and structured problem-solving. The MIT license makes it safe for commercial use. At ~8 GB VRAM at Q4_K_M, it fits on 12 GB GPUs. The trade-off: it is less strong on creative writing, general chat, and world knowledge compared to general-purpose models of similar size. Best used as a specialized reasoning tool alongside a general-purpose model.

Tags: Math, Reasoning, STEM, Logic, Commercial

Technical Specifications

Total Parameters: 14.7B
Architecture: Dense
Attention Type: GQA (Grouped Query Attention)
Hidden Dimension: d = 5,120
Transformer Layers: 40
Attention Heads: 40
KV Heads: n_kv = 10
Head Dimension: d_head = 128
Activation Function: SwiGLU
Normalization: RMSNorm
Position Embedding: RoPE
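One practical consequence of the GQA numbers above: with only 10 KV heads instead of 40, the KV cache is a quarter the size it would be under full multi-head attention. A minimal sketch of that arithmetic, assuming an FP16 (2 bytes per value) cache:

```python
# KV-cache footprint implied by the spec table (assumption: FP16 cache).
LAYERS, D_HEAD, FP16_BYTES = 40, 128, 2

def kv_bytes_per_token(kv_heads: int) -> int:
    # K and V each store kv_heads * d_head values per layer, per token.
    return 2 * LAYERS * kv_heads * D_HEAD * FP16_BYTES

mha = kv_bytes_per_token(40)  # hypothetical full multi-head attention
gqa = kv_bytes_per_token(10)  # Phi-4's grouped-query attention (n_kv = 10)
print(gqa, "bytes/token;", mha // gqa, "x smaller than MHA")
```

At ~200 KB of cache per token, a full 16K-token context costs roughly 3 GB on top of the weights, which is why the context column matters in the requirements table below.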

System Requirements

Estimated VRAM in GB, assuming 10% overhead, for different quantization methods and context sizes.

Quantization   Bytes/Weight   Quality         1K ctx     16K ctx
Q4_K_M         0.50 B/W       ~97% of FP16    7.79 GB    10.72 GB
Q8_0           1.00 B/W       ~100% of FP16   15.39 GB   18.32 GB
F16            2.00 B/W       Reference       30.59 GB   33.52 GB

Key: Q4_K_M and Q8_0 footprints fit a 24 GB consumer GPU; F16 requires an 80 GB datacenter GPU; anything larger requires a cluster / multi-GPU setup.
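The figures above can be approximated with a simple formula: weights at the chosen bytes-per-weight, plus the KV cache for the context window, plus the stated 10% overhead. A minimal sketch under those assumptions (model dimensions from the spec table; the exact accounting used for the table may differ slightly):

```python
# Approximate VRAM estimate for Phi-4 14B (assumed formula:
# (weights + KV cache) * 1.10 overhead, FP16 KV cache, GiB = 2**30 bytes).
PARAMS = 14.7e9   # total parameters
LAYERS = 40       # transformer layers
N_KV = 10         # KV heads (GQA)
D_HEAD = 128      # head dimension
KV_BYTES = 2      # FP16 KV cache

def kv_cache_gib(ctx_tokens: int) -> float:
    # 2 (K and V) x layers x kv_heads x head_dim x bytes, per token.
    per_token = 2 * LAYERS * N_KV * D_HEAD * KV_BYTES  # 204,800 bytes/token
    return ctx_tokens * per_token / 2**30

def vram_gib(bytes_per_weight: float, ctx_tokens: int, overhead: float = 0.10) -> float:
    weights = PARAMS * bytes_per_weight / 2**30
    return (weights + kv_cache_gib(ctx_tokens)) * (1 + overhead)

for bpw, name in [(0.50, "Q4_K_M"), (1.00, "Q8_0"), (2.00, "F16")]:
    print(f"{name}: {vram_gib(bpw, 1024):.2f} GB @ 1K, {vram_gib(bpw, 16_384):.2f} GB @ 16K")
```

The results land within a few percent of the table; the ~3 GB gap between the 1K and 16K columns is almost entirely KV cache, which quantizing the weights does not shrink.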

Other Phi Models


Find the right GPU for Phi-4 14B

Use the interactive VRAM Calculator to see exactly how much memory you need at any quantization level, context length, and overhead setting.