
StarCoder2 7B


Parameters: 7.0B
Max Context: 16K tokens
Architecture: Dense
Modality: Text

About StarCoder2 7B

StarCoder2 7B is a dense transformer language model from the BigCode family, containing 7B parameters across 32 layers. It supports up to 16K tokens of context, with a hidden dimension of 4096 and 8 KV heads for efficient grouped-query attention (GQA). The model is released under the BigCode OpenRAIL license, targets code completion and instruction-following use, and has mature tooling for running locally.
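As a quick orientation, here is a minimal sketch of loading the model for code completion with the Hugging Face transformers library. It assumes the published `bigcode/starcoder2-7b` checkpoint and a GPU with enough VRAM (see System Requirements below); adjust the dtype or use a quantized build if memory is tight.

```python
# Minimal code-completion sketch for StarCoder2 7B via transformers.
# Assumes the "bigcode/starcoder2-7b" Hugging Face checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigcode/starcoder2-7b"  # base model; instruct variants also exist
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(
    checkpoint,
    torch_dtype=torch.float16,  # F16 weights: ~14.6 GB (see table below)
    device_map="auto",          # place layers on the available GPU(s)
)

prompt = "def fibonacci(n: int) -> int:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```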


Technical Specifications

Total Parameters: 7.0B
Architecture: Dense
Attention Type: GQA (Grouped-Query Attention)
Hidden Dimension: d = 4,096
Transformer Layers: 32
Attention Heads: 32
KV Heads: n_kv = 8
Head Dimension: d_head = 128
Activation Function: SwiGLU
Normalization: RMSNorm
Position Embedding: RoPE
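Those attention numbers are what make long contexts affordable: with GQA, the KV cache scales with the 8 KV heads rather than the 32 query heads. A minimal sketch of that arithmetic, assuming an FP16 cache (2 bytes per element; real runtimes differ in layout and precision):

```python
# Back-of-the-envelope KV-cache size from the specs above, showing why
# GQA (8 KV heads instead of 32) matters at long context. Assumes an
# FP16 cache; exact numbers vary by runtime.
layers, n_kv, n_heads, d_head = 32, 8, 32, 128
bytes_per_elem = 2  # FP16

def kv_cache_gib(seq_len: int, kv_heads: int) -> float:
    # 2x for the separate K and V tensors in every layer
    return 2 * layers * kv_heads * d_head * seq_len * bytes_per_elem / 2**30

for ctx in (1024, 16384):
    print(f"{ctx:>6} tokens: GQA {kv_cache_gib(ctx, n_kv):.2f} GiB "
          f"vs full MHA {kv_cache_gib(ctx, n_heads):.2f} GiB")
#   1024 tokens: GQA 0.12 GiB vs full MHA 0.50 GiB
#  16384 tokens: GQA 2.00 GiB vs full MHA 8.00 GiB
```

At the full 16K context, GQA keeps the cache around 2 GiB where full multi-head attention would need roughly 8 GiB, which is exactly the gap between the 1K and 16K columns in the table below.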

System Requirements

Estimated VRAM, in GB, assuming 10% runtime overhead, for different quantization methods and context sizes.

Quantization | Bytes/Weight | Quality vs. F16 | VRAM @ 1K ctx | VRAM @ 16K ctx
Q4_K_M       | 0.50         | ~97%            | 3.74 GB       | 5.62 GB
Q8_0         | 1.00         | ~100%           | 7.36 GB       | 9.24 GB
F16          | 2.00         | reference       | 14.60 GB      | 16.47 GB

Every configuration above fits a 24 GB consumer GPU; the calculator's other tiers are "fits 80 GB datacenter GPU" and "requires cluster / multi-GPU", neither of which this model needs.
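These figures can be approximated with a simple model: quantized weight bytes plus ~10% overhead, plus an FP16 KV cache for the requested context. The sketch below uses the parameter count and cache layout listed above and lands within a few percent of the table; the page's calculator presumably accounts for additional runtime buffers.

```python
# Simplified VRAM estimator: quantized weights with ~10% overhead plus an
# FP16 KV cache. An approximation only; expect results within a few
# percent of the table above, not an exact match.
PARAMS = 7.0e9
LAYERS, N_KV, D_HEAD = 32, 8, 128

def est_vram_gib(bytes_per_weight: float, ctx: int) -> float:
    weights = PARAMS * bytes_per_weight * 1.10       # 10% overhead on weights
    kv_cache = 2 * LAYERS * N_KV * D_HEAD * ctx * 2  # K+V tensors, FP16
    return (weights + kv_cache) / 2**30

for name, bpw in [("Q4_K_M", 0.50), ("Q8_0", 1.00), ("F16", 2.00)]:
    print(f"{name:>7}: {est_vram_gib(bpw, 1024):5.2f} GiB @ 1K ctx, "
          f"{est_vram_gib(bpw, 16384):5.2f} GiB @ 16K ctx")
```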


Find the right GPU for StarCoder2 7B

Use the interactive VRAM Calculator to see exactly how much memory you need at any quantization level, context length, and overhead setting.