LFM2 1.2B
1.2B
Parameters
32K
Max Context
Dense
Architecture
—
Released
Text
Modality
About LFM2 1.2B
LFM2 1.2B is a dense transformer language model from the Liquid family, containing 1.2B parameters across 24 layers. It supports up to 32K tokens of context with a hidden dimension of 2048 and 8 KV heads for efficient grouped-query attention (GQA). The model is released under the LFM Open License (Apache 2.0 based) and is designed as an on-device hybrid model with fast CPU and mobile inference.
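The GQA arrangement mentioned above can be illustrated with a minimal sketch: many query heads share a smaller set of KV heads, which shrinks the KV cache roughly in proportion to the head ratio. Note the 32 query heads and head dimension of 64 are assumptions for illustration (the card only states a hidden dimension of 2048 and 8 KV heads); LFM2's actual implementation may differ.

```python
import numpy as np

# Assumed split: 2048 hidden dim -> 32 query heads x head_dim 64,
# with 8 KV heads each shared by a group of 4 query heads.
HIDDEN, N_Q, N_KV = 2048, 32, 8
HEAD_DIM = HIDDEN // N_Q      # 64
GROUP = N_Q // N_KV           # 4 query heads per KV head

def gqa(q, k, v):
    """q: (seq, N_Q, HEAD_DIM); k, v: (seq, N_KV, HEAD_DIM)."""
    # Broadcast each KV head to the GROUP query heads that share it.
    k = np.repeat(k, GROUP, axis=1)               # (seq, N_Q, HEAD_DIM)
    v = np.repeat(v, GROUP, axis=1)
    scores = np.einsum("qhd,khd->hqk", q, k) / np.sqrt(HEAD_DIM)
    # Numerically stable softmax over key positions.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return np.einsum("hqk,khd->qhd", weights, v)  # (seq, N_Q, HEAD_DIM)

rng = np.random.default_rng(0)
seq = 4
out = gqa(rng.standard_normal((seq, N_Q, HEAD_DIM)),
          rng.standard_normal((seq, N_KV, HEAD_DIM)),
          rng.standard_normal((seq, N_KV, HEAD_DIM)))
print(out.shape)  # (4, 32, 64)
```

The memory saving comes from caching only the 8 KV heads per layer instead of all 32, a 4x reduction in KV-cache size under these assumptions.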
Technical Specifications
System Requirements
Estimated VRAM at 10% overhead for different quantization methods and context sizes.
| Quantization | Bytes/Weight | Quality | 1K ctx | 32K ctx |
|---|---|---|---|---|
| Q4_K_M | 0.50 | ~97% of FP16 | 0.71 GB | 3.62 GB |
| Q8_0 | 1.00 | ~100% of FP16 | 1.33 GB | 4.24 GB |
| F16 | 2.00 | Reference | 2.57 GB | 5.48 GB |

Every configuration above fits on a consumer GPU.
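The exact formula behind these estimates is not published here, but a common back-of-the-envelope approximation is weight memory (parameters × bytes per weight) plus an FP16 KV cache that grows linearly with context, scaled by the 10% overhead. The head dimension of 64 and the FP16 KV-cache assumption are illustrative guesses, so this sketch lands close to, but not exactly on, the table's figures.

```python
def estimate_vram_gb(params_b=1.2, bytes_per_weight=2.0, ctx=32_768,
                     n_layers=24, n_kv_heads=8, head_dim=64, overhead=0.10):
    """Rough VRAM estimate in GB. head_dim=64 and an FP16 KV cache
    (2 bytes/element, regardless of weight quantization) are assumptions."""
    weights = params_b * 1e9 * bytes_per_weight
    # KV cache: keys + values, one entry per layer, KV head, head dim, position.
    kv = 2 * n_layers * n_kv_heads * head_dim * ctx * 2
    return (weights + kv) * (1 + overhead) / 1e9

# Q4_K_M at 1K context under these assumptions:
print(round(estimate_vram_gb(bytes_per_weight=0.5, ctx=1024), 2))  # 0.72
```

That 0.72 GB is within a few percent of the table's 0.71 GB, which suggests the table's formula differs slightly in its KV-cache or overhead accounting.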
Find the right GPU for LFM2 1.2B
Use the interactive VRAM Calculator to see exactly how much memory you need at any quantization level, context length, and overhead setting.