Mistral Large 3 (MoE)
- Parameters: 675.0B
- Active: 41.0B
- Max Context: 256K
- Architecture: MoE
- Released: —
- Modality: Text
About Mistral Large 3 (MoE)
Mistral Large 3 (MoE) is a mixture-of-experts (MoE) transformer language model from the Mistral family. It has 675B total parameters across 88 layers, all of which must be loaded into VRAM, with 41B active per token via top-4 routing over 128 experts. It supports up to 262K tokens of context, with a hidden dimension of 12288 and 8 KV heads for efficient grouped-query attention (GQA). The model is released under the Apache 2.0 license and is server class.
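The total/active split follows directly from the routing arithmetic: with top-4 routing over 128 experts, only 4/128 ≈ 3% of the expert weights run for any given token. A back-of-envelope sketch (assuming equal-sized experts and lumping attention, embedding, and router weights into a single always-active block, neither of which the page confirms) recovers the approximate split from the published totals:

```python
# Back-of-envelope split of Mistral Large 3's parameters into
# always-active ("shared") weights and expert weights, derived only
# from the published totals. Assumes equal-sized experts.

TOTAL_PARAMS = 675e9   # all weights, resident in VRAM
ACTIVE_PARAMS = 41e9   # weights used per token
N_EXPERTS = 128        # experts per MoE layer
TOP_K = 4              # experts routed to per token

# total  = shared + expert
# active = shared + (TOP_K / N_EXPERTS) * expert
active_fraction = TOP_K / N_EXPERTS                       # 0.03125
expert = (TOTAL_PARAMS - ACTIVE_PARAMS) / (1 - active_fraction)
shared = TOTAL_PARAMS - expert

print(f"expert params: {expert / 1e9:.1f}B")   # ~654.5B
print(f"shared params: {shared / 1e9:.1f}B")   # ~20.5B
```

Under these assumptions, roughly 654B parameters sit in the experts and about 20B are always active, which is consistent with the 41B active figure.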
Technical Specifications
System Requirements
Estimated VRAM in GB at 10% overhead for different quantization methods and context sizes. Every configuration falls in the Cluster / Multi-GPU tier; a sketch of how these estimates are derived follows the table.
| Quantization | Bytes/Weight | Quality vs. FP16 | 1K ctx | 195K ctx | 256K ctx |
|---|---|---|---|---|---|
| Q4_K_M | 0.50 | ~97% | 349.2 GB | 416.0 GB | 436.9 GB |
| Q8_0 | 1.00 | ~100% | 698.1 GB | 764.9 GB | 785.8 GB |
| F16 | 2.00 | Reference | 1395.9 GB | 1462.7 GB | 1483.6 GB |
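As a rough cross-check, total VRAM can be approximated as quantized weights (with the 10% overhead) plus an FP16 KV cache. The minimal sketch below makes two assumptions the page does not state, a head dimension of 128 and FP16 K/V entries, and lands within about 1% of the table:

```python
# A minimal sketch of the VRAM estimate behind the table above.
# Assumptions not confirmed by the source: head_dim = 128, the KV
# cache is kept in FP16, the 10% overhead applies to the weights
# only, and figures are reported in binary GB (GiB).

GIB = 2**30

N_LAYERS = 88
N_KV_HEADS = 8
HEAD_DIM = 128          # assumed; not stated on the page
KV_BYTES = 2            # FP16 K/V entries
TOTAL_PARAMS = 675e9
OVERHEAD = 1.10         # 10% runtime overhead on the weights

def kv_cache_bytes(ctx_tokens: int) -> int:
    """K and V caches across all layers for ctx_tokens of context."""
    per_token = 2 * N_LAYERS * N_KV_HEADS * HEAD_DIM * KV_BYTES
    return ctx_tokens * per_token          # ~0.34 MiB per token

def vram_gib(bytes_per_weight: float, ctx_tokens: int) -> float:
    weights = TOTAL_PARAMS * bytes_per_weight * OVERHEAD
    return (weights + kv_cache_bytes(ctx_tokens)) / GIB

for name, bpw in [("Q4_K_M", 0.50), ("Q8_0", 1.00), ("F16", 2.00)]:
    print(name, [round(vram_gib(bpw, ctx * 1024), 1)
                 for ctx in (1, 195, 256)])
```

The per-token KV cost, 2 × 88 layers × 8 KV heads × 128 × 2 bytes ≈ 0.34 MiB, is what the 8-head GQA design keeps small; it accounts for the roughly 88 GB gap between the 1K and 256K columns in every row.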
Find the right GPU for Mistral Large 3 (MoE)
Use the interactive VRAM Calculator to see exactly how much memory you need at any quantization level, context length, and overhead setting.