PCPartGuide
Buy hardware by what it needs to run, not the category it sits in.
Decide whether a GPU, CPU, RAM kit, SSD, or PSU actually fits your workload. No fake scores — specs, fit, and tradeoffs.
Where to start
Pick your entry point
Four paths, depending on where you are in the buying process.
Best GPU for local LLMs
Start here when you know you want a GPU for Ollama, llama.cpp, or LM Studio.
Reference: VRAM requirements by model
Check if your target model fits at Q4, Q5, Q8, or with longer context windows.
Tool: VRAM calculator
Model size, quantization, context length, GPU headroom — run the numbers first.
Directory: Local LLM model directory
Compare 94 models across 21 families by parameters, architecture, and estimated memory needs.
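The VRAM calculator's inputs boil down to simple arithmetic: weight memory scales with parameter count and quantization bit-width, and the KV cache grows with context length. A back-of-envelope sketch, with illustrative defaults (roughly the shape of a 7B Llama-style model) that are assumptions for this example, not the site's actual calculator:

```python
def estimate_vram_gb(params_b, bits_per_weight, context_len=4096,
                     n_layers=32, hidden_dim=4096, overhead_gb=1.0):
    """Rough VRAM estimate for a quantized decoder-only model.

    weights:  params * bits_per_weight / 8 bytes
    KV cache: 2 tensors (K and V) * n_layers * context_len * hidden_dim
              * 2 bytes (fp16)
    overhead: CUDA context, activations, runner buffers (flat guess)
    """
    weights_gb = params_b * 1e9 * bits_per_weight / 8 / 1e9
    kv_gb = 2 * n_layers * context_len * hidden_dim * 2 / 1e9
    return weights_gb + kv_gb + overhead_gb

# 7B model at Q4 (~4.5 effective bits/weight incl. scales), 4k context
print(round(estimate_vram_gb(7, 4.5), 1))  # → 7.1 (GB)
```

This is why a 7B model at Q4 is comfortable on a 12 GB card while the same model at Q8, or with a 32k context, is not: both knobs move the total independently.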
ContinueBuying guides
GPU comparisons before you checkout
These answer the questions that decide whether you buy a used RTX 3090, another 24 GB card, AMD, or NVIDIA.
Best GPU for Local LLMs: The One We Keep Recommending (And the 3 We Don't)
The definitive guide to picking the right GPU for running local LLMs. Compare VRAM tiers, memory bandwidth, software ecosystem support, and power requirements across the RTX 5090, RTX 5080, RX 7900 XTX, and used options.
Best 24 GB GPU for Local LLMs: The 3 Cards That Actually Matter
The best 24 GB GPUs for local LLMs compared: used RTX 4090, RX 7900 XTX, and used RTX 3090. Head-to-head comparison with bandwidth, pricing, and model compatibility.
Best GPU for Local LLMs Under $500: Building a Rig That Actually Works
The best GPU for local LLMs under $500 is the used RTX 3090 with 24 GB VRAM. No new card at this price matches its VRAM capacity. Alternatives and what to avoid.
Best GPU for Local LLMs Under $800: Why Buying New Instead of Used Costs You 8 GB of VRAM
Under $800, the used RTX 3090 (24 GB) battles the new RTX 4070 Ti Super (16 GB). The RX 7900 XTX used also fits. Which is right for your LLM workload?
Best GPU for Local LLMs Under $1,500: The Decision That Determines Your Model Limits
Under $1,500, the used RTX 4090 (24 GB) and new RTX 5080 (16 GB GDDR7) compete. The RX 7900 XTX offers new 24 GB at a lower price. Which should you buy?
Hardware fit checks
The buying checklist is different for local AI
A GPU that looks good for gaming can still be the wrong local LLM purchase.
GPU VRAM capacity
Determines which model tiers you can actually run.
Memory bandwidth
Directly affects token generation speed.
Platform support
CUDA, ROCm, Metal — runner support matters.
Power budget
PSU headroom is critical for high-end GPUs.
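The bandwidth item in the checklist above is the one gamers tend to overlook: single-batch token generation reads every weight once per token, so memory bandwidth, not compute, usually sets the ceiling. A rough upper-bound sketch, assuming a bandwidth-bound decode (the specific numbers are illustrative):

```python
def peak_tokens_per_sec(bandwidth_gb_s, model_size_gb):
    """Upper bound on decode speed: each generated token streams the full
    set of weights through the memory bus once, so throughput is capped
    at bandwidth / model size. Real runners land below this ceiling."""
    return bandwidth_gb_s / model_size_gb

# RTX 3090: ~936 GB/s memory bandwidth; a 7B model at Q4 is ~4 GB of weights
print(round(peak_tokens_per_sec(936, 4.0)))  # → 234 tokens/s ceiling
```

This is why two cards with similar compute but different bandwidth can feel very different for local LLM inference.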
Compare before you buy
The exact decisions people make before checkout
Not generic category pages: real GPU-vs-GPU comparisons that settle the purchase.
Model-first research
Start with the model, then choose the hardware.
The local LLM directory covers 94 models across 21 families. Understand model size and architecture before comparing GPU tiers.
Browse all 94 models