Mistral · Dense · Apache 2.0

Ministral 3 3B

Parameters: 3.0B
Max Context: 256K
Architecture: Dense
Modality: Text

About Ministral 3 3B

Ministral 3 3B is a dense transformer language model from the Mistral family, with 3B parameters across 26 layers. It supports up to 262K (262,144) tokens of context, uses a hidden dimension of 3,072, and relies on 8 KV heads for efficient grouped-query attention (GQA). The model is released under the Apache 2.0 license and was cascade-distilled from Mistral Small 3.1.
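With GQA, several query heads share each KV head, so only the 8 KV heads' keys and values ever need to be cached. Here is a minimal PyTorch sketch of the idea using the head counts listed for this model; it is an illustration of the technique, not Mistral's actual implementation:

```python
# Illustrative grouped-query attention (GQA) sketch with Ministral 3 3B's
# listed head counts: 32 query heads, 8 KV heads, head_dim 128.
import torch
import torch.nn.functional as F

n_heads, n_kv_heads, head_dim = 32, 8, 128
seq_len = 16
group = n_heads // n_kv_heads  # 4 query heads share each KV head

q = torch.randn(1, n_heads, seq_len, head_dim)
k = torch.randn(1, n_kv_heads, seq_len, head_dim)
v = torch.randn(1, n_kv_heads, seq_len, head_dim)

# Expand each KV head across its group of query heads, then run standard
# scaled dot-product attention. Only n_kv_heads K/V tensors are cached,
# shrinking the KV cache by n_heads / n_kv_heads = 4x versus full MHA.
k = k.repeat_interleave(group, dim=1)
v = v.repeat_interleave(group, dim=1)
out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
print(out.shape)  # torch.Size([1, 32, 16, 128])
```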

Use cases: General Purpose · Chat

Technical Specifications

Total Parameters: 3.0B
Architecture: Dense
Attention Type: GQA (Grouped-Query Attention)
Hidden Dimension: d = 3,072
Transformer Layers: 26
Attention Heads: 32
KV Heads: n_kv = 8
Head Dimension: d_head = 128
Activation Function: SwiGLU
Normalization: RMSNorm
Position Embedding: RoPE
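For readers who prefer the spec sheet in code, the same numbers can be captured as a config object along with the per-token KV-cache cost they imply. The class and field names below are hypothetical, not Mistral's config schema:

```python
# The spec table above as a config object, plus the fp16 KV-cache cost
# per token that it implies. Names are illustrative only.
from dataclasses import dataclass

@dataclass
class MinistralConfig:
    n_params: float = 3.0e9
    n_layers: int = 26
    hidden_dim: int = 3072
    n_heads: int = 32
    n_kv_heads: int = 8
    head_dim: int = 128
    max_context: int = 262_144  # 256K tokens

cfg = MinistralConfig()

# KV cache per token = 2 (K and V) * layers * kv_heads * head_dim * bytes/elem.
kv_bytes_per_token = 2 * cfg.n_layers * cfg.n_kv_heads * cfg.head_dim * 2  # fp16
print(f"{kv_bytes_per_token / 1024:.0f} KiB per token")  # ~104 KiB
```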

System Requirements

Estimated VRAM in GB, assuming 10% overhead, for different quantization methods and context sizes. A back-of-the-envelope estimator is sketched after the table.

Quantization   Bytes/Weight   Quality          1K ctx    195K ctx   256K ctx
Q4_K_M         0.50 B/W       ~97% of FP16     1.65 †    21.39 †    27.55 ‡
Q8_0           1.00 B/W       ~100% of FP16    3.20 †    22.94 †    29.10 ‡
F16            2.00 B/W       Reference        6.30 †    26.04 ‡    32.20 ‡

† Fits a 24 GB consumer GPU. ‡ Fits an 80 GB datacenter GPU. Larger footprints require a cluster / multi-GPU setup.
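The figures above follow a simple pattern: weight bytes plus an fp16 KV cache, inflated by the stated 10% overhead. The sketch below estimates VRAM in that spirit; its constants are assumptions rather than the site's exact calculator, so expect ballpark agreement with the table, not identical numbers:

```python
# Back-of-the-envelope VRAM estimate: weight bytes + fp16 KV cache,
# plus 10% overhead. Constants are assumptions; results should land
# near, but not exactly on, the table values.

def vram_gb(ctx_tokens: int, bytes_per_weight: float,
            n_params: float = 3.0e9, n_layers: int = 26,
            n_kv_heads: int = 8, head_dim: int = 128,
            overhead: float = 0.10) -> float:
    weights = n_params * bytes_per_weight
    kv_cache = 2 * n_layers * n_kv_heads * head_dim * 2 * ctx_tokens  # fp16 K+V
    return (weights + kv_cache) * (1 + overhead) / 1e9

for name, bpw in [("Q4_K_M", 0.50), ("Q8_0", 1.00), ("F16", 2.00)]:
    print(name, [round(vram_gb(c, bpw), 2) for c in (1_024, 195_000, 262_144)])
```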

Find the right GPU for Ministral 3 3B

Use the interactive VRAM Calculator to see exactly how much memory you need at any quantization level, context length, and overhead setting.