OLMo 3 32B
| Parameters | Max Context | Architecture | Released | Modality |
|---|---|---|---|---|
| 32.0B | 32K | Dense | — | Text |
About OLMo 3 32B
OLMo 3 32B is a dense transformer language model from the AI2 OLMo family, containing 32B parameters across 64 layers. It supports a context window of up to 32K tokens, with a hidden dimension of 5120 and 8 KV heads for efficient grouped-query attention (GQA). It is a fully open research model, available in instruct and thinking variants.
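To make the KV-head arrangement concrete, here is a minimal sketch of how GQA shares key/value heads across query heads. The page gives the hidden dimension (5120) and the KV-head count (8); the head dimension of 128, and therefore the 40 query heads, are assumptions for illustration only.

```python
import torch

# Minimal GQA sketch. The model card gives hidden_size=5120 and 8 KV heads;
# head_dim=128 (hence 40 query heads) is an assumption for illustration.
num_q_heads, num_kv_heads, head_dim = 40, 8, 128
group_size = num_q_heads // num_kv_heads  # 5 query heads share each KV head

batch, seq = 1, 16
q = torch.randn(batch, num_q_heads, seq, head_dim)
k = torch.randn(batch, num_kv_heads, seq, head_dim)  # only 8 KV heads cached
v = torch.randn(batch, num_kv_heads, seq, head_dim)

# Broadcast each KV head across its group of query heads
k = k.repeat_interleave(group_size, dim=1)  # (1, 40, 16, 128)
v = v.repeat_interleave(group_size, dim=1)

scores = q @ k.transpose(-2, -1) / head_dim ** 0.5
out = torch.softmax(scores, dim=-1) @ v     # (1, 40, 16, 128)
# The KV cache stores 8 heads instead of 40: a 5x memory saving per token.
```

Caching 8 KV heads rather than one per query head is what keeps long-context inference memory in check, which matters directly for the VRAM estimates below.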
Technical Specifications
System Requirements
Estimated VRAM (in GB, assuming 10% overhead) for different quantization methods and context sizes.
| Quantization | Bytes/Weight | Quality vs FP16 | 1K ctx (GB) | 32K ctx (GB) |
|---|---|---|---|---|
| Q4_K_M | 0.50 | ~97% | 16.79 (consumer GPU) | 24.54 (datacenter GPU) |
| Q8_0 | 1.00 | ~100% | 33.33 (datacenter GPU) | 41.08 (datacenter GPU) |
| F16 | 2.00 | Reference | 66.41 (datacenter GPU) | 74.16 (datacenter GPU) |
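Figures like these come from a simple sizing formula: model weights at the chosen bytes-per-weight, plus the KV cache for the given context, plus overhead. The hedged Python sketch below shows that arithmetic; the layer count (64) and KV-head count (8) come from the model card, head_dim=128 is an assumption, and the results land close to, but not exactly on, the table's numbers, since the page's exact accounting isn't specified.

```python
# Back-of-the-envelope VRAM estimate: weights + FP16 KV cache, scaled by
# the 10% overhead the table mentions. Layer and KV-head counts are from
# the model card; head_dim=128 is an assumption, so results only
# approximate the table's figures.

def vram_gb(bytes_per_weight: float, ctx: int,
            params: float = 32.0e9, layers: int = 64,
            kv_heads: int = 8, head_dim: int = 128,
            overhead: float = 0.10) -> float:
    weights = params * bytes_per_weight
    # K and V tensors: 2 x layers x ctx x kv_heads x head_dim x 2 bytes (FP16)
    kv_cache = 2 * layers * ctx * kv_heads * head_dim * 2
    return (weights + kv_cache) * (1 + overhead) / 1e9

print(f"Q4_K_M @ 32K ctx: ~{vram_gb(0.50, 32_768):.1f} GB")
print(f"F16    @ 1K ctx:  ~{vram_gb(2.00, 1_024):.1f} GB")
```

Note how the KV cache, not the weights, drives the gap between the 1K and 32K columns: at 32K tokens the cache alone is roughly 8.6 GB in FP16.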
Find the right GPU for OLMo 3 32B
Use the interactive VRAM Calculator to see exactly how much memory you need at any quantization level, context length, and overhead setting.