Cohere · Dense · CC-BY-NC 4.0

Command R+ 104B


Parameters: 104.0B
Max Context: 125K
Architecture: Dense
Released:
Modality: Text

About Command R+ 104B

Command R+ 104B is a dense transformer language model from the Cohere family, containing 104B parameters across 64 layers. It supports up to 128K tokens of context with a hidden dimension of 12288 and 8 KV heads for efficient grouped-query attention (GQA).
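The grouped-query layout described above can be sketched numerically: 96 query heads attend against only 8 shared key/value heads (12 query heads per KV head), which is what shrinks the KV cache relative to full multi-head attention. A minimal NumPy sketch with the card's head counts and random data (variable names are illustrative, not from any Cohere codebase):

```python
import numpy as np

rng = np.random.default_rng(0)
n_heads, n_kv, d_head, seq = 96, 8, 128, 4
group = n_heads // n_kv  # 12 query heads share each KV head

q = rng.standard_normal((n_heads, seq, d_head))
k = rng.standard_normal((n_kv, seq, d_head))   # only 8 KV heads are stored
v = rng.standard_normal((n_kv, seq, d_head))

# Expand each KV head across its group of query heads, then run
# standard scaled dot-product attention.
k_exp = np.repeat(k, group, axis=0)  # (96, seq, d_head)
v_exp = np.repeat(v, group, axis=0)
scores = q @ k_exp.transpose(0, 2, 1) / np.sqrt(d_head)
weights = np.exp(scores - scores.max(-1, keepdims=True))
weights /= weights.sum(-1, keepdims=True)    # softmax over keys
out = weights @ v_exp
print(out.shape)  # (96, 4, 128)
```

Only `k` and `v` (8 heads each) need to be cached during generation; the expansion to 96 heads happens on the fly.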

Research · Enterprise

Technical Specifications

Total Parameters: 104.0B
Architecture: Dense
Attention Type: GQA (Grouped Query Attention)
Hidden Dimension: d = 12,288
Transformer Layers: 64
Attention Heads: 96
KV Heads: n_kv = 8
Head Dimension: d_head = 128
Activation Function: SwiGLU
Normalization: RMSNorm
Position Embedding: RoPE
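The per-token KV-cache footprint follows directly from the figures above: each layer stores one key and one value vector per KV head. A quick back-of-the-envelope check, assuming an FP16 (2-byte) cache:

```python
# KV-cache size from the specs above, assuming FP16 cache entries.
N_LAYERS = 64
N_KV_HEADS = 8
D_HEAD = 128
BYTES_PER_ELEM = 2  # FP16

# Each layer stores one key and one value vector per KV head.
kv_bytes_per_token = 2 * N_LAYERS * N_KV_HEADS * D_HEAD * BYTES_PER_ELEM
print(kv_bytes_per_token)  # 262144 bytes = 256 KiB per token

# At a 125K (128,000-token) context:
kv_total_gib = kv_bytes_per_token * 128_000 / 2**30
print(kv_total_gib)  # 31.25 GiB
```

That ~31 GiB is also the gap between the 1K and 125K columns in the VRAM table below; with 96 full KV heads instead of 8, the cache would be 12x larger.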

System Requirements

Estimated VRAM (GB) with 10% overhead, for different quantization methods and context sizes.

Quantization | 1K ctx | 125K ctx
Q4_K_M (0.50 B/W, ~97% of FP16) | 54.0 (Datacenter GPU) | 85.0 (Cluster / Multi-GPU)
Q8_0 (1.00 B/W, ~100% of FP16) | 107.8 (Cluster / Multi-GPU) | 138.8 (Cluster / Multi-GPU)
F16 (2.00 B/W, reference) | 215.3 (Cluster / Multi-GPU) | 246.3 (Cluster / Multi-GPU)
Fits 24 GB consumer GPU
Fits 80 GB datacenter GPU
Requires cluster / multi-GPU
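The table's figures can be roughly reproduced from the spec sheet. The formula below is an assumption, not the site's documented methodology: model weights at the stated bytes-per-weight, plus an FP16 KV cache for the context, plus 10% overhead, reported in GiB. It lands within a few GiB of each table entry:

```python
# Hedged reconstruction of the VRAM estimates above (assumed formula:
# weights * bytes-per-weight + FP16 KV cache, all times 1.10 overhead,
# in GiB). The site's exact methodology may differ slightly.
PARAMS = 104.0e9
KV_BYTES_PER_TOKEN = 2 * 64 * 8 * 128 * 2  # K and V, per layer and KV head, FP16

def est_vram_gib(bytes_per_weight, ctx_tokens):
    total_bytes = PARAMS * bytes_per_weight + KV_BYTES_PER_TOKEN * ctx_tokens
    return total_bytes * 1.10 / 2**30

for name, bpw in [("Q4_K_M", 0.5), ("Q8_0", 1.0), ("F16", 2.0)]:
    print(name, round(est_vram_gib(bpw, 1_024), 1), round(est_vram_gib(bpw, 128_000), 1))
```

For example, Q4_K_M at 1K context comes out near 53-54 GiB, matching the table's 54.0; the small residual differences suggest the site adds a modest fixed buffer on top of the 10% overhead.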

