Gemma 4 26B-A4B (MoE)
Gemma 4 26B-A4B (MoE) is a mixture-of-experts (MoE) transformer language model from the Gemma family.
Parameters: 25.2B
Active: 3.8B
Max Context: 256K
Architecture: MoE
Released: —
Modality: Text
About Gemma 4 26B-A4B (MoE)
Gemma 4 26B-A4B (MoE) is a mixture-of-experts (MoE) transformer language model from the Gemma family. All 25.2B parameters are loaded into VRAM, but only 3.8B are active per token. The model has 30 layers, a hidden dimension of 4096, and 8 KV heads for efficient grouped-query attention (GQA). Each MoE layer routes a token to 8 of 128 experts plus 1 shared expert. Attention uses a 1K sliding window, and the context window extends to 256K (roughly 262K) tokens.
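For reference, the figures above can be collected into a single configuration sketch. This is an illustrative Python dataclass, not the model's actual config schema; the field names and the head_dim value (not stated on this page) are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Gemma4MoEConfig:
    """Illustrative summary of the specs listed above (not the real config class)."""
    total_params: float = 25.2e9   # parameters loaded into VRAM
    active_params: float = 3.8e9   # parameters active per token
    num_layers: int = 30
    hidden_size: int = 4096
    num_kv_heads: int = 8          # grouped-query attention (GQA)
    head_dim: int = 128            # assumed; not stated on this page
    num_experts: int = 128
    experts_per_token: int = 8     # routed experts
    shared_experts: int = 1
    sliding_window: int = 1024     # 1K sliding-window attention
    max_context: int = 262_144     # 256K tokens
```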
Technical Specifications
System Requirements
Estimated VRAM in GB at 10% overhead for different quantization methods and context sizes.
| Quantization | B/W (bytes/weight) | Quality | 1K ctx | 195K ctx | 256K ctx |
|---|---|---|---|---|---|
| Q4_K_M | 0.50 | ~97% of FP16 | 13.14 (Consumer GPU) | 35.91 (Datacenter GPU) | 43.03 (Datacenter GPU) |
| Q8_0 | 1.00 | ~100% of FP16 | 26.17 (Datacenter GPU) | 48.94 (Datacenter GPU) | 56.05 (Datacenter GPU) |
| F16 | 2.00 | Reference | 52.22 (Datacenter GPU) | 74.99 (Datacenter GPU) | 82.10 (Cluster / Multi-GPU) |
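The table values can be approximated with a back-of-the-envelope formula: quantized weight memory plus an FP16 KV cache that grows linearly with context, plus roughly 10% overhead. The sketch below is an approximation under assumed values (head_dim of 128, FP16 KV cache, overhead applied to the weights only); the calculator's exact accounting may differ by a gigabyte or so.

```python
def estimate_vram_gb(total_params: float, bytes_per_weight: float, ctx_len: int,
                     num_layers: int = 30, num_kv_heads: int = 8,
                     head_dim: int = 128, overhead: float = 0.10) -> float:
    """Rough VRAM estimate: quantized weights (plus overhead) plus an FP16 KV cache.

    head_dim=128 and the exact overhead accounting are assumptions; the
    interactive calculator's numbers may differ slightly.
    """
    weight_bytes = total_params * bytes_per_weight
    # KV cache: 2 (K and V) * layers * KV heads * head_dim * 2 bytes (FP16) per token
    kv_bytes = 2 * num_layers * num_kv_heads * head_dim * 2 * ctx_len
    return (weight_bytes * (1 + overhead) + kv_bytes) / 1024**3

# Example: Q4_K_M (0.50 bytes/weight) at 1K context -> roughly 13,
# in line with the first cell of the table above.
print(round(estimate_vram_gb(25.2e9, 0.50, 1024), 2))
```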
Find the right GPU for Gemma 4 26B-A4B (MoE)
Use the interactive VRAM Calculator to see exactly how much memory you need at any quantization level, context length, and overhead setting.