# Phi-4-mini 3.8B
| Parameters | Max Context | Architecture | Released | Modality |
|---|---|---|---|---|
| 3.8B | 16K | Dense | — | Text |
## About Phi-4-mini 3.8B
Phi-4-mini 3.8B is a dense transformer language model from the Phi family, containing 3.8B parameters across 32 layers. It supports up to 16K tokens of context with a hidden dimension of 3072 and 6 KV heads for efficient grouped-query attention (GQA).
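The KV-cache savings from grouped-query attention can be sketched numerically. The 32 layers, hidden dimension of 3072, and 6 KV heads are from the spec above; the 24 query heads (giving a head dimension of 128) and fp16 cache precision are assumptions for illustration, not published figures.

```python
# KV-cache footprint at a given context length: GQA (6 KV heads) vs. a
# hypothetical full multi-head attention (MHA) variant of the same model.
# n_layers and n_kv_heads come from the spec; head_dim=128 is assumed.

def kv_cache_bytes(ctx_len, n_layers=32, n_kv_heads=6, head_dim=128,
                   bytes_per_elem=2):  # 2 bytes/elem = fp16 cache
    # Factor of 2 covers the separate K and V tensors at every layer.
    return 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_elem

gqa = kv_cache_bytes(16_384)                 # 6 KV heads (GQA)
mha = kv_cache_bytes(16_384, n_kv_heads=24)  # hypothetical full MHA
print(f"GQA: {gqa / 2**30:.2f} GiB, MHA: {mha / 2**30:.2f} GiB")
```

Under these assumptions, caching 6 KV heads instead of 24 cuts the KV cache to a quarter of the full-MHA size at any context length, which is what makes 16K contexts practical on consumer hardware.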
## Technical Specifications

### System Requirements
Estimated VRAM in GB, assuming 10% overhead, for different quantization methods and context sizes.
| Quantization | Bytes/Weight | Quality | 1K ctx | 16K ctx |
|---|---|---|---|---|
| Q4_K_M | 0.50 | ~97% of FP16 | 2.06 GB (Consumer GPU) | 3.46 GB (Consumer GPU) |
| Q8_0 | 1.00 | ~100% of FP16 | 4.02 GB (Consumer GPU) | 5.43 GB (Consumer GPU) |
| F16 | 2.00 | Reference | 7.95 GB (Consumer GPU) | 9.36 GB (Consumer GPU) |
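A back-of-the-envelope version of these estimates is weight bytes (parameter count times bytes per weight) plus a KV cache, scaled by the 10% overhead. The head dimension and fp16 KV-cache precision below are assumptions, and the exact accounting behind the table is not specified, so this sketch will track the table's trend but not match it to the decimal.

```python
# Rough VRAM estimate: quantized weights + fp16 KV cache, +10% overhead.
# head_dim=128 and the fp16 KV cache are assumptions for illustration.

PARAMS = 3.8e9  # parameter count from the spec above
BYTES_PER_WEIGHT = {"Q4_K_M": 0.50, "Q8_0": 1.00, "F16": 2.00}

def estimate_vram_gib(quant, ctx_len, n_layers=32, n_kv_heads=6,
                      head_dim=128, overhead=0.10):
    weights = PARAMS * BYTES_PER_WEIGHT[quant]
    # 2x for K and V tensors, 2 bytes per fp16 element
    kv_cache = 2 * n_layers * n_kv_heads * head_dim * ctx_len * 2
    return (weights + kv_cache) * (1 + overhead) / 2**30

for quant in BYTES_PER_WEIGHT:
    print(f"{quant}: {estimate_vram_gib(quant, 16_384):.2f} GiB @ 16K ctx")
```

Note that context length only affects the KV-cache term, which is why every quantization level in the table grows by the same ~1.4 GB from 1K to 16K.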
### Find the right GPU for Phi-4-mini 3.8B
Use the interactive VRAM Calculator to see exactly how much memory you need at any quantization level, context length, and overhead setting.