AI2 OLMo · Dense · Open Weights

OLMo 3 32B

Parameters: 32.0B
Max Context: 32K
Architecture: Dense
Released:
Modality: Text

About OLMo 3 32B

OLMo 3 32B is a dense transformer language model from the AI2 OLMo family, containing 32B parameters across 64 layers. It supports up to 32K (32,768) tokens of context, with a hidden dimension of 5,120 and 8 KV heads for efficient grouped-query attention (GQA). It is a fully open research model, released with instruction and thinking variants.


Technical Specifications

Total Parameters: 32.0B
Architecture: Dense
Attention Type: GQA (Grouped-Query Attention)
Hidden Dimension: d = 5,120
Transformer Layers: 64
Attention Heads: 40
KV Heads: n_kv = 8
Head Dimension: d_head = 128
Activation Function: SwiGLU
Normalization: RMSNorm
Position Embedding: RoPE
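
These dimensions are internally consistent: 40 attention heads at a head dimension of 128 span the full 5,120 hidden dimension, and 40 query heads sharing 8 KV heads gives a GQA group size of 5 query heads per KV head. The sketch below records the spec table in a config object and asserts those relationships; the class and field names (OLMo3Config, gqa_group_size, and so on) are illustrative, not taken from any official OLMo codebase.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OLMo3Config:
    # Values copied from the spec table above.
    n_params: float = 32.0e9
    hidden_dim: int = 5120
    n_layers: int = 64
    n_heads: int = 40        # query heads
    n_kv_heads: int = 8      # shared K/V heads (GQA)
    head_dim: int = 128
    max_context: int = 32_768

    @property
    def gqa_group_size(self) -> int:
        # How many query heads attend through each shared KV head.
        return self.n_heads // self.n_kv_heads

cfg = OLMo3Config()
assert cfg.n_heads * cfg.head_dim == cfg.hidden_dim  # 40 * 128 == 5,120
assert cfg.gqa_group_size == 5                       # 40 query heads / 8 KV heads
```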

System Requirements

Estimated VRAM at 10% overhead for different quantization methods and context sizes.

| Quantization | Bytes/Weight | Quality | 1K ctx | 32K ctx |
| --- | --- | --- | --- | --- |
| Q4_K_M | 0.50 B/W | ~97% of FP16 | 16.79 GB (consumer GPU) | 24.54 GB (datacenter GPU) |
| Q8_0 | 1.00 B/W | ~100% of FP16 | 33.33 GB (datacenter GPU) | 41.08 GB (datacenter GPU) |
| F16 | 2.00 B/W | reference | 66.41 GB (datacenter GPU) | 74.16 GB (datacenter GPU) |

Consumer GPU: fits a 24 GB consumer GPU. Datacenter GPU: fits an 80 GB datacenter GPU. Larger totals require a cluster / multi-GPU setup.
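
These estimates decompose into weights plus KV cache: weights take n_params × bytes-per-weight scaled by the 10% overhead, and an FP16 KV cache takes 2 × n_layers × n_kv × d_head × 2 bytes per token, which for this architecture is 256 KiB per token. That accounts for the constant 7.75 gap between the 1K and 32K columns (31,744 extra tokens × 256 KiB ≈ 7.75 GiB). The sketch below redoes that arithmetic under those assumptions; the page's exact formula isn't published, so the results land slightly below the table's figures. The function name and defaults are illustrative.

```python
def estimate_vram_gib(
    n_params: float = 32.0e9,       # total parameters
    bytes_per_weight: float = 0.5,  # 0.5 for Q4_K_M, 1.0 for Q8_0, 2.0 for F16
    ctx_len: int = 1024,            # context length in tokens
    n_layers: int = 64,
    n_kv_heads: int = 8,
    head_dim: int = 128,
    overhead: float = 0.10,         # the 10% overhead assumed by the table
) -> float:
    """Rough VRAM estimate: quantized weights (plus overhead) + FP16 KV cache."""
    weight_bytes = n_params * bytes_per_weight * (1.0 + overhead)
    # K and V caches: 2 tensors x layers x KV heads x head dim x 2 bytes (FP16).
    kv_bytes_per_token = 2 * n_layers * n_kv_heads * head_dim * 2  # 262,144 B
    kv_bytes = kv_bytes_per_token * ctx_len
    return (weight_bytes + kv_bytes) / 2**30

# Q4_K_M at 1K vs. 32K context; the ~8 GiB gap matches the table's columns,
# though the absolute figures sit a bit below the table's 16.79 / 24.54.
print(round(estimate_vram_gib(bytes_per_weight=0.5, ctx_len=1024), 2))   # ~16.64
print(round(estimate_vram_gib(bytes_per_weight=0.5, ctx_len=32768), 2))  # ~24.39
```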


Find the right GPU for OLMo 3 32B

Use the interactive VRAM Calculator to see exactly how much memory you need at any quantization level, context length, and overhead setting.