Gemma · Dense · Gemma License

Gemma 4 31B

Parameters: 30.7B · Max Context: 256K · Architecture: Dense · Modality: Text

About Gemma 4 31B

Gemma 4 31B is a dense transformer language model from the Gemma family, containing 30.7B parameters across 60 layers. It supports up to 262,144 tokens (256K) of context with a hidden dimension of 5,632 and 8 KV heads for efficient grouped-query attention (GQA). The architecture combines hybrid local+global attention, dual RoPE position embeddings, and TurboQuant 3-bit KV-cache compression; the model ranks #3 among open models on the Arena leaderboard.
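
As a rough illustration of the GQA layout described above, here is a minimal PyTorch sketch. The head counts are illustrative (16 query heads sharing the spec's 8 KV heads, head dim 128), chosen so that each KV head serves a whole group of query heads; this is a sketch of the mechanism, not the model's actual implementation.

```python
import torch
import torch.nn.functional as F

def gqa(q, k, v):
    # q: [batch, seq, n_heads, head_dim]; k, v: [batch, seq, n_kv, head_dim].
    # Each K/V head is shared by a group of n_heads // n_kv query heads,
    # which is what shrinks the KV cache relative to full multi-head attention.
    group = q.shape[2] // k.shape[2]
    k = k.repeat_interleave(group, dim=2)   # expand KV heads to match query heads
    v = v.repeat_interleave(group, dim=2)
    q, k, v = [t.transpose(1, 2) for t in (q, k, v)]  # -> [batch, heads, seq, dim]
    out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
    return out.transpose(1, 2)

# Illustrative sizes: 16 query heads over 8 KV heads, head_dim = 128 as in the spec.
q = torch.randn(1, 1024, 16, 128)
k = torch.randn(1, 1024, 8, 128)
v = torch.randn(1, 1024, 8, 128)
print(gqa(q, k, v).shape)  # torch.Size([1, 1024, 16, 128])
```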

Use cases: General Purpose · Code
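
The About section credits TurboQuant 3-bit KV-cache compression, but no algorithm is given on this page. Purely to illustrate why a 3-bit cache helps (3 bits plus a shared scale versus 16 bits per value, roughly a 5x reduction), here is a generic symmetric 3-bit round trip; it is a hypothetical stand-in, not TurboQuant's actual scheme.

```python
import torch

def quant3(x):
    # Per-row absmax scaling into the signed 3-bit range [-4, 3].
    scale = (x.abs().amax(dim=-1, keepdim=True) / 4.0).clamp(min=1e-8)
    codes = (x / scale).round().clamp(-4, 3).to(torch.int8)  # 3-bit codes, stored unpacked
    return codes, scale

def dequant3(codes, scale):
    return codes.float() * scale

kv = torch.randn(8, 128)            # e.g. one token's K values across 8 KV heads
codes, scale = quant3(kv)
print((dequant3(codes, scale) - kv).abs().mean())  # small reconstruction error
```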

Technical Specifications

Total Parameters:     30.7B
Architecture:         Dense
Attention Type:       GQA (Grouped-Query Attention)
Hidden Dimension:     d = 5,632
Transformer Layers:   60
Attention Heads:      44
KV Heads:             n_kv = 8
Head Dimension:       d_head = 128
Activation Function:  SwiGLU
Normalization:        RMSNorm
Position Embedding:   RoPE
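
The table lists RoPE, and the About section mentions a "dual" RoPE variant without specifying it (plausibly two bases, e.g. for local vs. global layers, but that is an assumption). Below is a sketch of the standard single-base rotation applied to query/key channels, with head_dim = 128 from the spec:

```python
import torch

def rope(x, base=10_000.0):
    # x: [seq, n_heads, head_dim]. Rotates consecutive channel pairs by
    # position-dependent angles; base = 10,000 is the common default,
    # an assumption since the card does not state the model's base(s).
    seq, _, dim = x.shape
    inv_freq = base ** (-torch.arange(0, dim, 2) / dim)       # [dim/2]
    angles = torch.arange(seq)[:, None] * inv_freq[None, :]   # [seq, dim/2]
    cos = angles.cos()[:, None, :]
    sin = angles.sin()[:, None, :]
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

x = torch.randn(1024, 8, 128)   # keys for 1,024 positions, 8 KV heads, d_head = 128
print(rope(x).shape)            # torch.Size([1024, 8, 128])
```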

System Requirements

Estimated VRAM in GB, assuming 10% overhead, for different quantization methods and context sizes (B/W = bytes per weight). A rough sketch of the underlying arithmetic follows the legend.

Quantization                      | 1K ctx                    | 195K ctx                       | 256K ctx
Q4_K_M (0.50 B/W, ~97% of FP16)   | 16.10 GB (consumer GPU)   | 61.64 GB (datacenter GPU)      | 75.87 GB (datacenter GPU)
Q8_0 (1.00 B/W, ~100% of FP16)    | 31.97 GB (datacenter GPU) | 77.51 GB (datacenter GPU)      | 91.74 GB (cluster / multi-GPU)
F16 (2.00 B/W, reference)         | 63.71 GB (datacenter GPU) | 109.2 GB (cluster / multi-GPU) | 123.5 GB (cluster / multi-GPU)

Legend: consumer GPU = fits a 24 GB consumer GPU; datacenter GPU = fits an 80 GB datacenter GPU; cluster / multi-GPU = requires multiple GPUs or a cluster.
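
These figures are consistent with a simple accounting: model weights at the quantization's bytes per weight, an FP16 KV cache sized from the spec's GQA geometry (60 layers, 8 KV heads, d_head = 128), and 10% overhead on top. The sketch below reproduces that arithmetic approximately; expect the last decimals to differ from the table (parameter-count rounding, and any hybrid-attention or quantized-KV accounting the calculator applies).

```python
GIB = 2**30

def vram_gib(params, bytes_per_weight, ctx,
             layers=60, n_kv=8, head_dim=128, overhead=0.10):
    # Weights at the chosen precision, plus K and V caches at FP16 (2 bytes each),
    # with a flat 10% overhead -- a naive estimate, not the site's exact math.
    weights = params * bytes_per_weight
    kv_cache = 2 * layers * n_kv * head_dim * ctx * 2
    return (weights + kv_cache) * (1 + overhead) / GIB

for name, bpw in [("Q4_K_M", 0.5), ("Q8_0", 1.0), ("F16", 2.0)]:
    print(f"{name}: {vram_gib(30.7e9, bpw, 1024):.2f} GiB at 1K ctx")
```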


Find the right GPU for Gemma 4 31B

Use the interactive VRAM Calculator to see exactly how much memory you need at any quantization level, context length, and overhead setting.