Qwen 3 14B
- Parameters: 14.8B
- Max Context: 128K
- Architecture: Dense
- Released: —
- Modality: Text
About Qwen 3 14B
Qwen 3 14B is a dense transformer language model from the Qwen family, containing 14.77B parameters across 40 layers. It supports up to 131K tokens of context with a hidden dimension of 5120 and 8 KV heads for efficient grouped-query attention (GQA). The model is released under the Apache 2.0 license, and its dense 14B size makes it an excellent fit for workstation hardware.
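The KV-cache savings from GQA can be sketched from the numbers above. This is a back-of-the-envelope calculation, assuming a head dimension of 128 (hidden size 5120 divided by 40 query heads) and FP16 cache entries:

```python
# Per-token KV-cache size for Qwen 3 14B with grouped-query attention.
# Assumptions: head_dim = 128 (5120 hidden / 40 query heads), FP16 cache.
layers = 40
query_heads = 40
kv_heads = 8
head_dim = 128
bytes_fp16 = 2

# K and V each store kv_heads * head_dim values per layer per token.
kv_per_token = 2 * layers * kv_heads * head_dim * bytes_fp16
print(kv_per_token)  # 163840 bytes, i.e. 160 KiB per token

# Full multi-head attention would cache all 40 query heads instead.
mha_per_token = 2 * layers * query_heads * head_dim * bytes_fp16
print(mha_per_token // kv_per_token)  # 5x reduction from GQA
```

At long contexts this matters: with only 8 KV heads the cache is a fifth of what full multi-head attention would require at the same context length.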
Technical Specifications
System Requirements
Estimated VRAM at 10% overhead for different quantization methods and context sizes.
| Quantization | Bytes/Weight | Quality vs FP16 | VRAM @ 1K ctx | VRAM @ 128K ctx |
|---|---|---|---|---|
| Q4_K_M | 0.50 | ~97% | 7.79 GB (Consumer GPU) | 27.63 GB (Datacenter GPU) |
| Q8_0 | 1.00 | ~100% | 15.43 GB (Consumer GPU) | 35.27 GB (Datacenter GPU) |
| F16 | 2.00 | Reference | 30.69 GB (Datacenter GPU) | 50.54 GB (Datacenter GPU) |
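The figures above can be approximated with a simple formula: model weights (parameters times bytes per weight) plus an FP16 KV cache, scaled by the overhead factor. The sketch below uses assumed architecture values (40 layers, 8 KV heads, head_dim 128) and a simplified cost model, so it will not reproduce the table exactly:

```python
def estimate_vram_gb(params_b=14.77, bytes_per_weight=2.0,
                     ctx_tokens=1024, overhead=0.10,
                     layers=40, kv_heads=8, head_dim=128):
    """Rough VRAM estimate: weights + FP16 KV cache, plus a flat
    overhead factor. Simplified; real runtimes differ in activation
    memory, allocator behavior, and quantization block layout."""
    weights = params_b * 1e9 * bytes_per_weight
    # K and V, FP16 (2 bytes), per layer, per KV head, per token.
    kv_cache = 2 * layers * kv_heads * head_dim * 2 * ctx_tokens
    return (weights + kv_cache) * (1 + overhead) / 1e9

# Q4_K_M at short context vs F16 at full 128K context:
print(round(estimate_vram_gb(bytes_per_weight=0.5), 2))
print(round(estimate_vram_gb(bytes_per_weight=2.0, ctx_tokens=131072), 2))
```

Note how the KV cache, not the weights, dominates the growth between the 1K and 128K columns: the weight footprint is constant per quantization, while the cache scales linearly with context length.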
Find the right GPU for Qwen 3 14B
Use the interactive VRAM Calculator to see exactly how much memory you need at any quantization level, context length, and overhead setting.