Stop guessing whether your GPU can run that model.

Check VRAM fit, compare hardware, and read buying guides built for local AI workloads.

Hardware fit checks

The buying checklist is different for local AI

A GPU that looks good for gaming can still be the wrong local LLM purchase.

GPU VRAM capacity

Determines which model tiers the card can run at all.

Memory bandwidth

Directly affects token generation speed.

Platform support

CUDA, ROCm, Metal — runner support matters.

Power budget

PSU headroom is critical for high-end GPUs.
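The checklist above can be sketched as a quick back-of-envelope calculation. The 20% overhead factor and the memory-bound throughput bound are common rules of thumb, not figures published by this site; treat the function names and numbers as illustrative assumptions.

```python
def vram_needed_gb(params_billions: float, bits_per_weight: float,
                   overhead: float = 1.2) -> float:
    """Estimate VRAM (GB) for the weights, with ~20% headroom for
    KV cache and activations (assumed rule of thumb)."""
    return params_billions * bits_per_weight / 8 * overhead

def tokens_per_sec_ceiling(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Decode is memory-bound: each generated token streams roughly the
    full weights once, so bandwidth / model size bounds tokens/sec."""
    return bandwidth_gb_s / model_size_gb

# Example: a 7B-parameter model at 4-bit quantization
size_gb = vram_needed_gb(7, 4)          # 7 * 4 / 8 * 1.2 = 4.2 GB
fits_8gb_card = size_gb < 8.0           # True: fits an 8 GB GPU
```

The same arithmetic explains why memory bandwidth, not compute, usually sets generation speed for a card whose VRAM the model already fits.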

Compare before you buy

The exact decisions people make before checkout

Not generic category pages — real GPU vs GPU comparisons that settle the purchase.

View all guides

Model-first research

Start with the model, then choose the hardware.

The local LLM directory covers 94 models across 21 families. Understand model size and architecture before comparing GPU tiers.

Browse all 94 models