Gemma · Dense · Gemma License

Gemma 3 1B

Gemma 3 1B is a dense transformer language model from the Gemma family, containing 1B parameters across 26 layers. It supports up to 32K tokens of context with a hidden dimension of 1152 and 3 KV heads for efficient grouped-query attention (GQA).

Parameters: 1.0B
Max Context: 32K
Architecture: Dense
Released:
Modality: Text


On-Device · Basic Chat

Technical Specifications

Total Parameters: 1.0B
Architecture: Dense
Attention Type: GQA (Grouped Query Attention)
Hidden Dimension: d = 1,152
Transformer Layers: 26
Attention Heads: 12
KV Heads: n_kv = 3
Head Dimension: d_head = 96
Activation Function: SwiGLU
Normalization: RMSNorm
Position Embedding: RoPE
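Because of GQA, the 12 query heads share only n_kv = 3 key/value heads, so the KV cache is 4x smaller than full multi-head attention would need. A minimal sketch of the per-token cache size using the dimensions above (an FP16 cache at 2 bytes per element is an assumption):

```python
# Per-token KV-cache size for Gemma 3 1B under GQA, using the
# specs listed above. FP16 cache (2 bytes/element) is assumed.
N_LAYERS = 26
N_KV_HEADS = 3   # GQA: 12 query heads share these 3 KV heads
N_Q_HEADS = 12
HEAD_DIM = 96
BYTES_FP16 = 2

def kv_cache_bytes_per_token(n_kv_heads: int) -> int:
    # Both keys and values are cached, hence the leading factor of 2.
    return 2 * N_LAYERS * n_kv_heads * HEAD_DIM * BYTES_FP16

gqa = kv_cache_bytes_per_token(N_KV_HEADS)  # 29,952 bytes ≈ 29 KiB/token
mha = kv_cache_bytes_per_token(N_Q_HEADS)   # full MHA equivalent
print(gqa, mha // gqa)  # 29952 4
```

At 32K tokens that per-token figure compounds to roughly 1 GB of cache, which is why long contexts dominate memory for a model this small.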

System Requirements

Estimated VRAM in GB, assuming 10% overhead, for different quantization methods and context sizes.

Quantization   Bytes/weight               1K ctx    32K ctx
Q4_K_M         0.50 B/W (~97% of FP16)    0.55 GB   1.43 GB
Q8_0           1.00 B/W (~100% of FP16)   1.06 GB   1.95 GB
F16            2.00 B/W (reference)       2.10 GB   2.98 GB

Every configuration above fits a 24 GB consumer GPU. (Legend tiers: fits 24 GB consumer GPU; fits 80 GB datacenter GPU; requires cluster / multi-GPU.)
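The table's figures can be approximated with a simple formula: weight bytes scaled by the 10% overhead, plus an FP16 KV cache that grows linearly with context. The exact accounting behind the table is not specified, so this sketch lands in the same ballpark rather than matching it digit-for-digit:

```python
# Rough VRAM estimate: weights * (1 + overhead) + FP16 KV cache.
# The 10% overhead follows the table's description; treating the KV
# cache as FP16 regardless of weight quantization is an assumption.
PARAMS = 1.0e9
KV_BYTES_PER_TOKEN = 2 * 26 * 3 * 96 * 2  # K+V, layers, KV heads, head dim, fp16

def vram_gb(bytes_per_weight: float, ctx_tokens: int, overhead: float = 0.10) -> float:
    weights = PARAMS * bytes_per_weight * (1 + overhead)
    kv_cache = ctx_tokens * KV_BYTES_PER_TOKEN
    return (weights + kv_cache) / 1e9

print(round(vram_gb(0.50, 1024), 2))   # Q4_K_M at 1K ctx  → 0.58
print(round(vram_gb(0.50, 32768), 2))  # Q4_K_M at 32K ctx → 1.53
```

Note how at 32K context the KV cache (~1 GB) exceeds the Q4_K_M weights themselves, matching the table's pattern of context length dominating at low quantization.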


Find the right GPU for Gemma 3 1B

Use the interactive VRAM Calculator to see exactly how much memory you need at any quantization level, context length, and overhead setting.