Ministral 3 3B
| Parameters | Max Context | Architecture | Released | Modality |
|---|---|---|---|---|
| 3.0B | 256K | Dense | — | Text |
About Ministral 3 3B
Ministral 3 3B is a dense transformer language model from the Mistral family, containing 3B parameters across 26 layers. It supports up to 262K tokens of context, with a hidden dimension of 3072 and 8 KV heads for efficient grouped-query attention (GQA). The model is released under the Apache 2.0 license and was cascade-distilled from Mistral Small 3.1.
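The GQA mechanism mentioned above lets several query heads share one KV head, shrinking the KV cache. A toy NumPy sketch of the shapes involved follows; the card gives 8 KV heads and a 3072 hidden dimension, while the 24 query heads and head size of 128 are assumptions for illustration only:

```python
import numpy as np

# Toy grouped-query attention (sketch). 8 KV heads come from the card;
# 24 query heads and head_dim=128 are assumed, not stated on the card.
n_q_heads, n_kv_heads, head_dim, seq = 24, 8, 128, 16
group = n_q_heads // n_kv_heads  # 3 query heads share each KV head

rng = np.random.default_rng(0)
q = rng.standard_normal((n_q_heads, seq, head_dim))
k = rng.standard_normal((n_kv_heads, seq, head_dim))
v = rng.standard_normal((n_kv_heads, seq, head_dim))

# Broadcast each KV head across its group of query heads
k_exp = np.repeat(k, group, axis=0)  # (24, seq, head_dim)
v_exp = np.repeat(v, group, axis=0)

# Standard scaled dot-product attention over the expanded KV tensors
scores = q @ k_exp.transpose(0, 2, 1) / np.sqrt(head_dim)
weights = np.exp(scores - scores.max(-1, keepdims=True))
weights /= weights.sum(-1, keepdims=True)
out = weights @ v_exp
print(out.shape)  # (24, 16, 128)
```

Only the 8 KV heads need to be cached per layer, which is why GQA reduces long-context memory compared with full multi-head attention.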
Technical Specifications
System Requirements
Estimated VRAM (GB, including 10% overhead) for different quantization methods and context sizes.
| Quantization | Bytes/Weight | Quality | 1K ctx | 195K ctx | 256K ctx |
|---|---|---|---|---|---|
| Q4_K_M | 0.50 B/W | ~97% of FP16 | 1.65 GB (consumer GPU) | 21.39 GB (consumer GPU) | 27.55 GB (datacenter GPU) |
| Q8_0 | 1.00 B/W | ~100% of FP16 | 3.20 GB (consumer GPU) | 22.94 GB (consumer GPU) | 29.10 GB (datacenter GPU) |
| F16 | 2.00 B/W | Reference | 6.30 GB (consumer GPU) | 26.04 GB (datacenter GPU) | 32.20 GB (datacenter GPU) |
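Estimates like those in the table combine the quantized weight footprint with a KV cache that grows linearly with context. A rough sketch of that arithmetic is below; the layer count, KV head count, and parameter count come from the card, while the head size of 128, an fp16 KV cache, and the exact overhead accounting are assumptions, so the numbers will approximate rather than reproduce the table above:

```python
def estimate_vram_gb(params_b, bytes_per_weight, ctx_tokens,
                     layers=26, kv_heads=8, head_dim=128,
                     kv_bytes=2, overhead=0.10):
    """Weights plus fp16 KV cache, inflated by a fixed overhead factor.

    head_dim=128 and the fp16 KV cache are assumptions; the table's
    exact methodology is not specified on the card.
    """
    weights = params_b * 1e9 * bytes_per_weight            # model weights, bytes
    # K and V caches: 2 tensors per layer, one slot per token
    kv_cache = 2 * layers * kv_heads * head_dim * kv_bytes * ctx_tokens
    return (weights + kv_cache) * (1 + overhead) / 1e9     # bytes -> GB

print(f"Q4_K_M @ 1K ctx:   {estimate_vram_gb(3.0, 0.50, 1024):.2f} GB")
print(f"F16    @ 256K ctx: {estimate_vram_gb(3.0, 2.00, 262144):.2f} GB")
```

The dominant term flips with context length: at 1K tokens the weights dominate, while at 256K tokens the KV cache does, which is why even the 4-bit quantization needs a datacenter-class GPU at full context.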
Find the right GPU for Ministral 3 3B
Use the interactive VRAM Calculator to see exactly how much memory you need at any quantization level, context length, and overhead setting.