
Phi-4-multimodal 5.6B

Phi-4-multimodal 5.6B is a dense transformer language model from the Phi family, containing 5.6B parameters across 36 layers. It supports up to 16K tokens of context with a hidden dimension of 3584 and 7 KV heads for efficient grouped-query attention (GQA).

Parameters: 5.6B
Max Context: 16K
Architecture: Dense
Released: —
Modality: Text + Audio + Vision

About Phi-4-multimodal 5.6B

Phi-4-multimodal 5.6B is a dense transformer language model from the Phi family, containing 5.6B parameters across 36 layers. It supports up to 16K tokens of context with a hidden dimension of 3584 and 7 KV heads for efficient grouped-query attention (GQA). It is released under the MIT license and accepts image, audio, and text inputs, making it a strong compact option for local multimodal use.
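With GQA, the 28 query heads share 7 KV heads, so each group of 4 query heads attends through the same key/value projection, shrinking the KV cache by 4x. A minimal numpy sketch of this head sharing, using the dimensions listed on this page (an illustration, not the actual Phi-4 implementation):

```python
import numpy as np

n_heads, n_kv_heads, d_head = 28, 7, 128
group = n_heads // n_kv_heads  # 4 query heads per KV head

seq = 16
q = np.random.randn(n_heads, seq, d_head)
k = np.random.randn(n_kv_heads, seq, d_head)  # only these are cached
v = np.random.randn(n_kv_heads, seq, d_head)

# Expand each KV head to serve its group of 4 query heads
k_exp = np.repeat(k, group, axis=0)  # (28, seq, d_head)
v_exp = np.repeat(v, group, axis=0)

scores = q @ k_exp.transpose(0, 2, 1) / np.sqrt(d_head)
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)
out = weights @ v_exp  # (28, seq, d_head), same shape as full MHA output
```

During generation only `k` and `v` are cached, so the cache holds 7 heads per layer instead of 28.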

General Purpose · Chat

Technical Specifications

Total Parameters: 5.6B
Architecture: Dense
Attention Type: GQA (Grouped Query Attention)
Hidden Dimension: d = 3,584
Transformer Layers: 36
Attention Heads: 28
KV Heads: n_kv = 7
Head Dimension: d_head = 128
Activation Function: SwiGLU
Normalization: RMSNorm
Position Embedding: RoPE
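These numbers are internally consistent: 28 attention heads × a head dimension of 128 gives the hidden dimension of 3,584, and the 7 KV heads fix the per-token KV-cache footprint. A quick sanity check derived from the specs above:

```python
layers, n_heads, n_kv, d_head, d_model = 36, 28, 7, 128, 3584

# Query heads times head dimension must equal the hidden dimension
assert n_heads * d_head == d_model

# FP16 KV cache per token: K and V (x2), per layer, per KV head, 2 bytes each
bytes_per_token = 2 * layers * n_kv * d_head * 2
print(bytes_per_token)  # 129024 bytes, about 126 KiB per token
```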

System Requirements

Estimated VRAM (GB) assuming 10% overhead, for different quantization methods and context sizes.

| Quantization | Bits/Weight | 1K ctx | 16K ctx |
|---|---|---|---|
| Q4_K_M | 0.50 B/W (~97% of FP16) | 3.02 GB | 4.86 GB |
| Q8_0 | 1.00 B/W (~100% of FP16) | 5.91 GB | 7.76 GB |
| F16 | 2.00 B/W (reference) | 11.70 GB | 13.55 GB |

Every configuration above fits a 24 GB consumer GPU; none requires an 80 GB datacenter GPU or a multi-GPU cluster.
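The table's figures can be roughly reproduced as weight memory plus an FP16 KV cache, with the stated 10% overhead. A sketch under those assumptions (the page's exact accounting may differ slightly, so expect small deviations from the table):

```python
def estimate_vram_gib(params, bytes_per_weight, ctx,
                      layers=36, n_kv=7, d_head=128,
                      kv_bytes=2, overhead=0.10):
    """Rough VRAM estimate in GiB: quantized weights + FP16 KV cache,
    scaled by an overhead factor. Defaults use this page's specs."""
    weights = params * bytes_per_weight
    kv_cache = 2 * layers * n_kv * d_head * ctx * kv_bytes  # K and V
    return (weights + kv_cache) * (1 + overhead) / 2**30

# F16 weights at 16K context
print(round(estimate_vram_gib(5.6e9, 2.0, 16 * 1024), 2))  # ≈ 13.64, vs. 13.55 in the table
```

The small gap versus the table suggests a slightly different overhead or unit convention, but the formula captures how context length drives the KV-cache term.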

