Gemma · MoE · Gemma License

Gemma 4 26B-A4B (MoE)

Parameters: 25.2B
Active: 3.8B
Max Context: 256K
Architecture: MoE
Released:
Modality: Text

About Gemma 4 26B-A4B (MoE)

Gemma 4 26B-A4B (MoE) is a mixture-of-experts (MoE) transformer language model from the Gemma family. It has 25.2B total parameters loaded into VRAM across 30 layers, with 3.8B active per token. It supports up to 256K (262,144) tokens of context, with a hidden dimension of 4,096 and 8 KV heads for efficient grouped-query attention (GQA). The MoE feed-forward layers use 128 experts, with 8 routed experts active per token plus 1 shared expert, and attention uses a 1K-token sliding window.
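
To make the routing scheme concrete (128 routed experts, 8 active per token, plus 1 shared expert that always runs), here is a minimal, hypothetical sketch of such an MoE feed-forward block in PyTorch. It is not the actual Gemma 4 implementation; the expert width d_ff and the plain SiLU MLPs (standing in for SwiGLU) are assumptions made for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoEFeedForwardSketch(nn.Module):
    """Illustrative MoE block: 128 routed experts, top-8 routing per token,
    plus 1 shared expert applied to every token. A sketch only, not the
    actual Gemma 4 26B-A4B architecture."""

    def __init__(self, d_model=4096, d_ff=1024, n_experts=128, top_k=8):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts, bias=False)
        # Routed experts (d_ff is an assumed width; the real experts use SwiGLU).
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )
        # Shared expert: runs for every token regardless of the router.
        self.shared = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model)
        )

    def forward(self, x):                       # x: (n_tokens, d_model)
        weights, idx = torch.topk(self.router(x), self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)    # normalize over the 8 selected experts
        out = self.shared(x)                    # shared expert always contributes
        for slot in range(self.top_k):          # dispatch tokens to their chosen experts
            for e in idx[:, slot].unique().tolist():
                mask = idx[:, slot] == e
                out[mask] += weights[mask, slot].unsqueeze(-1) * self.experts[e](x[mask])
        return out
```

A forward pass evaluates only 9 of the 129 expert MLPs per token, which is where the gap between 25.2B total and 3.8B active parameters comes from.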

General Purpose · Code

Technical Specifications

Total Parameters: 25.2B
Active Parameters: 3.8B per token
Architecture: Mixture of Experts
Total Experts: 128 (8 routed + 1 shared active per token)
Attention Type: GQA (grouped-query attention)
Hidden Dimension: d = 4,096
Transformer Layers: 30
Attention Heads: 32
KV Heads: n_kv = 8
Head Dimension: d_head = 128
Activation Function: SwiGLU
Normalization: RMSNorm
Position Embedding: RoPE
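
One practical effect of GQA with n_kv = 8 (versus 32 query heads) is a roughly 4x smaller KV cache. The back-of-the-envelope below uses only the figures from the specifications above and assumes an FP16 cache with every layer attending to the full context; the 1K sliding window mentioned earlier would shrink this further on local-attention layers.

```python
# Rough KV-cache estimate from the specs above (assumptions: FP16 cache,
# all 30 layers cache the full context, no sliding-window savings).
layers, n_kv, d_head = 30, 8, 128
fp16_bytes = 2
per_token = 2 * layers * n_kv * d_head * fp16_bytes   # K and V for one token
print(per_token)                                      # 122,880 bytes (~120 KiB per token)

ctx = 256 * 1024                                      # 256K-token context
print(per_token * ctx / 2**30)                        # ~30 GiB of KV cache at full context

# With 32 KV heads (standard multi-head attention) the same cache would be
# ~120 GiB, which is the 4x saving GQA provides here.
```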

System Requirements

Estimated VRAM at 10% overhead for different quantization methods and context sizes.

Quantization: Q4_K_M (0.50 bytes/weight, ~97% of FP16)
  1K ctx:   13.14 GB (fits a 24 GB consumer GPU)
  195K ctx: 35.91 GB (fits an 80 GB datacenter GPU)
  256K ctx: 43.03 GB (fits an 80 GB datacenter GPU)

Quantization: Q8_0 (1.00 bytes/weight, ~100% of FP16)
  1K ctx:   26.17 GB (fits an 80 GB datacenter GPU)
  195K ctx: 48.94 GB (fits an 80 GB datacenter GPU)
  256K ctx: 56.05 GB (fits an 80 GB datacenter GPU)

Quantization: F16 (2.00 bytes/weight, reference)
  1K ctx:   52.22 GB (fits an 80 GB datacenter GPU)
  195K ctx: 74.99 GB (fits an 80 GB datacenter GPU)
  256K ctx: 82.10 GB (requires a cluster / multi-GPU setup)
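
The figures above are consistent with a simple weights-plus-KV-cache estimate. The sketch below shows one way such an estimate is commonly computed (quantized weights at the listed bytes/weight, an FP16 KV cache, and a flat 10% overhead); it is an assumption, not the calculator's exact formula, which may treat the sliding-window layers or cache precision differently.

```python
def estimate_vram_gib(total_params_b, bytes_per_weight, ctx_tokens,
                      layers=30, n_kv=8, d_head=128, overhead=0.10):
    """Rough VRAM estimate in GiB: quantized weights + FP16 KV cache + overhead.
    An illustrative approximation, not the VRAM Calculator's exact formula."""
    weight_bytes = total_params_b * 1e9 * bytes_per_weight
    kv_bytes = 2 * layers * n_kv * d_head * 2 * ctx_tokens   # K and V in FP16
    return (weight_bytes + kv_bytes) * (1 + overhead) / 2**30

# Example: Q4_K_M (0.50 bytes/weight) at a 1K context
print(round(estimate_vram_gib(25.2, 0.50, 1024), 2))   # ~13.0, close to the 13.14 listed above
```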

Find the right GPU for Gemma 4 26B-A4B (MoE)

Use the interactive VRAM Calculator to see exactly how much memory you need at any quantization level, context length, and overhead setting.