Liquid · Dense · Apache 2.0

LFM2 1.2B

LFM2 1.2B is a dense transformer language model from the Liquid family, containing 1.2B parameters across 24 layers. It supports up to 32K tokens of context with a hidden dimension of 2048 and 8 KV heads for efficient grouped-query attention (GQA).

Parameters: 1.2B
Max Context: 32K
Architecture: Dense
Released:
Modality: Text

About LFM2 1.2B

LFM2 1.2B is a 1.2B-parameter dense transformer from the Liquid family, distributed under the LFM Open License (based on Apache 2.0). It is a hybrid model built for on-device deployment, delivering fast inference on CPU and mobile hardware.

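To make the grouped-query attention arrangement concrete, the sketch below implements GQA at this model's dimensions, with 16 query heads sharing 8 KV heads (each KV head serves 2 query heads). It is a minimal PyTorch illustration; the module names and structure are assumptions, not LFM2's actual implementation.

```python
import torch
import torch.nn.functional as F

batch, seq_len = 1, 16
d_model, n_heads, n_kv_heads, d_head = 2048, 16, 8, 128
group = n_heads // n_kv_heads  # 2 query heads share each KV head

x = torch.randn(batch, seq_len, d_model)

# Queries keep all 16 heads; K and V project to only 8 heads,
# which is what shrinks the KV cache relative to full MHA.
w_q = torch.nn.Linear(d_model, n_heads * d_head, bias=False)
w_k = torch.nn.Linear(d_model, n_kv_heads * d_head, bias=False)
w_v = torch.nn.Linear(d_model, n_kv_heads * d_head, bias=False)

q = w_q(x).view(batch, seq_len, n_heads, d_head).transpose(1, 2)
k = w_k(x).view(batch, seq_len, n_kv_heads, d_head).transpose(1, 2)
v = w_v(x).view(batch, seq_len, n_kv_heads, d_head).transpose(1, 2)

# Expand K/V so each query head attends through its shared KV head.
k = k.repeat_interleave(group, dim=1)
v = v.repeat_interleave(group, dim=1)

out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
out = out.transpose(1, 2).reshape(batch, seq_len, d_model)
print(out.shape)  # torch.Size([1, 16, 2048])
```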

Technical Specifications

Total Parameters: 1.2B
Architecture: Dense
Attention Type: GQA (Grouped Query Attention)
Hidden Dimension: d = 2,048
Transformer Layers: 24
Attention Heads: 16
KV Heads: n_kv = 8
Head Dimension: d_head = 128
Activation Function: SwiGLU
Normalization: RMSNorm
Position Embedding: RoPE

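As a quick sanity check on the specifications above, the snippet below verifies that the attention heads multiply out to the hidden dimension and computes the per-token KV-cache footprint that GQA buys. The fp16 KV cache (2 bytes per value) is an assumption, not something the spec sheet states.

```python
n_layers, d_model, n_heads, n_kv, d_head = 24, 2048, 16, 8, 128

assert d_model == n_heads * d_head  # 16 heads x 128 dims = 2,048

def kv_bytes_per_token(kv_heads: int, dtype_bytes: int = 2) -> int:
    # Two tensors (K and V) per layer, each kv_heads x d_head values.
    return 2 * n_layers * kv_heads * d_head * dtype_bytes

print(kv_bytes_per_token(n_kv))     # 98,304 B (~96 KiB) with 8 KV heads
print(kv_bytes_per_token(n_heads))  # 196,608 B (~192 KiB) with full MHA
```

Halving the KV heads halves the KV cache, which is where most long-context memory goes.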
System Requirements

Estimated VRAM (GB) with 10% overhead for different quantization methods and context sizes.

Quantization | 1K ctx | 32K ctx
Q4_K_M (0.50 B/W, ~97% of FP16 quality) | 0.71 GB · Consumer GPU | 3.62 GB · Consumer GPU
Q8_0 (1.00 B/W, ~100% of FP16 quality) | 1.33 GB · Consumer GPU | 4.24 GB · Consumer GPU
F16 (2.00 B/W, reference) | 2.57 GB · Consumer GPU | 5.48 GB · Consumer GPU

Tiers: fits 24 GB consumer GPU · fits 80 GB datacenter GPU · requires cluster / multi-GPU.
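The table can be approximated with a back-of-the-envelope formula: weight bytes at the listed bytes-per-weight (B/W), plus an fp16 KV cache for the context, plus 10% overhead. The sketch below implements that formula; it lands in the same ballpark as the table, though the published figures likely use the model's precise parameter count and memory layout.

```python
PARAMS = 1.2e9
N_LAYERS, N_KV, D_HEAD = 24, 8, 128
KV_BYTES_PER_TOKEN = 2 * N_LAYERS * N_KV * D_HEAD * 2  # K+V per layer, fp16

def vram_gb(bytes_per_weight: float, ctx: int, overhead: float = 0.10) -> float:
    weights = PARAMS * bytes_per_weight
    kv_cache = ctx * KV_BYTES_PER_TOKEN
    return (weights + kv_cache) * (1 + overhead) / 1e9

for name, bpw in [("Q4_K_M", 0.5), ("Q8_0", 1.0), ("F16", 2.0)]:
    print(f"{name}: {vram_gb(bpw, 1024):.2f} GB @ 1K ctx, "
          f"{vram_gb(bpw, 32768):.2f} GB @ 32K ctx")
```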

Find the right GPU for LFM2 1.2B

Use the interactive VRAM Calculator to see exactly how much memory you need at any quantization level, context length, and overhead setting.