Nemotron 3 Nano 30B-A3B (MoE)
| Parameters | Active | Max Context | Architecture | Released | Modality |
|---|---|---|---|---|---|
| 30.0B | 3.0B | 256K | MoE | — | Text |
About Nemotron 3 Nano 30B-A3B (MoE)
Nemotron 3 Nano 30B-A3B (MoE) is a mixture-of-experts (MoE) transformer language model from the Nvidia family, containing 30B total parameters across 40 layers. All 30B parameters are loaded into VRAM, but only 3B are active per token. It supports up to 262K tokens of context, with a hidden dimension of 2560 and 8 KV heads for efficient grouped-query attention (GQA). The model is released under the Nemotron Open Model License and is suited to efficient local reasoning and agent workloads.
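To make the GQA figures concrete, the sketch below estimates the FP16 KV-cache footprint from the numbers above. The per-head dimension is not listed on this page, so a common value of 128 is assumed; treat the result as a rough approximation, not the exact accounting behind the table below.

```python
# Rough FP16 KV-cache estimate for Nemotron 3 Nano 30B-A3B (MoE).
# head_dim = 128 is an ASSUMPTION (not listed on this page);
# the other values come from the specs above.

n_layers = 40        # transformer layers
n_kv_heads = 8       # KV heads (grouped-query attention)
head_dim = 128       # ASSUMED per-head dimension
bytes_per_elem = 2   # FP16

# Both keys and values are cached, hence the leading factor of 2.
kv_bytes_per_token = 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem

for ctx in (1_024, 262_144):  # 1K and 256K contexts
    print(f"{ctx:>7} tokens -> ~{kv_bytes_per_token * ctx / 2**30:.1f} GiB KV cache")
```

Under these assumptions the cache costs about 160 KiB per token: negligible at 1K context, but roughly 40 GiB at the full 256K window, which is why the context columns dominate the VRAM table below.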
Technical Specifications
System Requirements
Estimated VRAM (GB) at 10% overhead for different quantization methods and context sizes.
| Quantization | Bytes/Weight | Quality | 1K ctx | 195K ctx | 256K ctx |
|---|---|---|---|---|---|
| Q4_K_M | 0.50 | ~97% of FP16 | 15.66 (Consumer GPU) | 46.02 (Datacenter GPU) | 55.51 (Datacenter GPU) |
| Q8_0 | 1.00 | ~100% of FP16 | 31.17 (Datacenter GPU) | 61.53 (Datacenter GPU) | 71.01 (Datacenter GPU) |
| F16 | 2.00 | Reference | 62.18 (Datacenter GPU) | 92.54 (Cluster / Multi-GPU) | 102.0 (Cluster / Multi-GPU) |
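For orientation, here is one plausible way to reconstruct these numbers: model weights (total parameters times bytes per weight, scaled by the 10% overhead) plus an FP16 KV cache. The head dimension of 128 is carried over from the assumption above and the overhead treatment is guessed, so this is a sketch that lands within roughly a gigabyte of each table entry, not the calculator's exact formula.

```python
# Plausible reconstruction of the VRAM table: weights (with 10% overhead) + FP16 KV cache.
# ASSUMPTIONS: head_dim = 128, overhead applies to weights only, table "GB" values are GiB.

PARAMS = 30e9                              # total parameters, all resident in VRAM
KV_BYTES_PER_TOKEN = 2 * 40 * 8 * 128 * 2  # K+V * layers * KV heads * head_dim * FP16
OVERHEAD = 1.10                            # 10% runtime overhead

def vram_gib(bytes_per_weight: float, ctx_tokens: int) -> float:
    """Estimated VRAM (GiB) for a given quantization and context length."""
    weights = PARAMS * bytes_per_weight * OVERHEAD
    kv_cache = KV_BYTES_PER_TOKEN * ctx_tokens
    return (weights + kv_cache) / 2**30

for name, bpw in (("Q4_K_M", 0.50), ("Q8_0", 1.00), ("F16", 2.00)):
    cells = " | ".join(f"{vram_gib(bpw, ctx):6.2f}" for ctx in (1_024, 199_680, 262_144))
    print(f"{name:7s} | {cells}")  # e.g. Q4_K_M |  15.52 |  45.84 |  55.37
```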
Find the right GPU for Nemotron 3 Nano 30B-A3B (MoE)
Use the interactive VRAM Calculator to see exactly how much memory you need at any quantization level, context length, and overhead setting.