Command R+ 104B
- Parameters: 104.0B
- Max Context: 125K
- Architecture: Dense
- Released: —
- Modality: Text
About Command R+ 104B
Command R+ 104B is a dense transformer language model from the Cohere family, containing 104B parameters across 64 layers. It supports up to 128K tokens of context with a hidden dimension of 12288 and 8 KV heads for efficient grouped-query attention (GQA).
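The KV-cache savings from grouped-query attention can be sketched from the figures above. This is a back-of-the-envelope calculation, assuming a head dimension of 128 and an fp16 cache (neither is stated on this page):

```python
# Sketch: KV-cache size for Command R+ 104B under grouped-query attention.
# head_dim = 128 and fp16 cache are assumptions, not confirmed by the page.
layers = 64
kv_heads = 8          # only 8 KV heads are cached, far fewer than the query heads
head_dim = 128        # assumed
bytes_per_elem = 2    # fp16
tokens = 125_000      # the table's long-context column

# K and V each store kv_heads * head_dim values per token per layer
kv_bytes = 2 * layers * kv_heads * head_dim * bytes_per_elem * tokens
print(f"KV cache: {kv_bytes / 1e9:.1f} GB")
```

The result (~32.8 GB decimal) is in the same ballpark as the ~31 GB gap between the 1K and 125K columns in the VRAM table below; with full multi-head attention the cache would be roughly an order of magnitude larger.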
Technical Specifications
System Requirements
Estimated VRAM at 10% overhead for different quantization methods and context sizes.
| Quantization | Bytes/weight | Quality | 1K ctx | 125K ctx |
|---|---|---|---|---|
| Q4_K_M | 0.50 B/W | ~97% of FP16 | 54.0 GB (Datacenter GPU) | 85.0 GB (Cluster / Multi-GPU) |
| Q8_0 | 1.00 B/W | ~100% of FP16 | 107.8 GB (Cluster / Multi-GPU) | 138.8 GB (Cluster / Multi-GPU) |
| F16 | 2.00 B/W | Reference | 215.3 GB (Cluster / Multi-GPU) | 246.3 GB (Cluster / Multi-GPU) |
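The table's estimates can be approximated as weights plus KV cache plus overhead. A minimal sketch follows; the per-token KV-cache size and the overhead factor are assumptions, and the exact formula behind the page's calculator is unknown, so the results track the table only roughly:

```python
# Sketch: VRAM estimate = (weights + KV cache) * overhead.
# kv_bytes_per_token assumes 64 layers * 8 KV heads * head_dim 128 * fp16,
# K and V both cached; the overhead factor is an assumption.
PARAMS = 104e9

def vram_gb(bytes_per_weight, ctx_tokens,
            kv_bytes_per_token=262_144, overhead=1.10):
    weights = PARAMS * bytes_per_weight          # e.g. 2.0 B/W for F16
    kv = kv_bytes_per_token * ctx_tokens
    return (weights + kv) * overhead / 1e9       # decimal GB

print(f"F16 @ 1K ctx:  {vram_gb(2.0, 1_000):.1f} GB")
print(f"F16 @ 125K ctx: {vram_gb(2.0, 125_000):.1f} GB")
```

Note the numbers do not reproduce the table exactly: applying the stated 10% overhead to 208 GB of F16 weights gives more than the 215.3 GB shown, so the calculator evidently uses a somewhat different accounting.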
Find the right GPU for Command R+ 104B
Use the interactive VRAM Calculator to see exactly how much memory you need at any quantization level, context length, and overhead setting.