Local LLM VRAM Calculator
Calculate if your GPU can run Large Language Models
Recommended: RTX 3090 / 4090
24GB VRAM is the gold standard. Upgrade your local GPU, or rent a cloud instance right now.
Expert Hardware Analysis
As a dense model with 22.2 billion parameters, Mistral Small 22B has a fixed memory footprint. Loading the weights and maintaining an 8,192-token context window requires 16.98 GB of VRAM. The RTX 4070 / 3060 offers only 12 GB, leaving a deficit of roughly 5 GB and forcing the system to offload layers to CPU/RAM. Expect a severe drop in inference speed.
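The estimate above can be sketched as weights plus KV cache plus a small fixed overhead. A minimal sketch in Python; the layer count, KV-head count, and head dimension below are illustrative assumptions, not Mistral's published config, so the result only roughly approximates the 16.98 GB figure:

```python
def estimate_vram_gb(params_b, bits_per_weight, n_layers, n_kv_heads,
                     head_dim, context, kv_bytes=2, overhead_gb=0.5):
    """Rough VRAM estimate for a dense model: weights + KV cache + overhead."""
    weights_gb = params_b * 1e9 * bits_per_weight / 8 / 1024**3
    # KV cache: one K and one V tensor per layer, per token (FP16 = 2 bytes)
    kv_gb = 2 * n_layers * n_kv_heads * head_dim * context * kv_bytes / 1024**3
    return weights_gb + kv_gb + overhead_gb

# 22.2B params at ~6 bits/weight with an 8,192-token FP16 cache
# (56 layers / 8 KV heads / head_dim 128 are assumed values)
print(round(estimate_vram_gb(22.2, 6, 56, 8, 128, 8192), 2))
```

The weights dominate; the cache only overtakes them at very long contexts.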
Hardware Guide & FAQ
16GB VRAM is the recommended entry point for 2026. It allows you to run:
- Llama 4 Scout (17B): Comfortably at Q4 quantization.
- DeepSeek R1 Distill (14B): Runs comfortably, even with long contexts.
- Mistral Small (22B): Requires efficient quantization (Q4) to fit.
However, for massive models like Llama 3.3 70B, 16GB (or even 24GB) is not enough. For those, you need 48GB+ VRAM (like RTX 6000 Ada) or a Dual-GPU setup (2x RTX 3090/4090).
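A quick sanity check on the dual-GPU claim: split the weights evenly across cards and reserve headroom for the KV cache. A sketch under assumed numbers (the ~4.85 bits/weight average for Q4_K_M is an approximation, and real splits are not perfectly even):

```python
def split_fits(params_b, bpw, n_gpus, vram_per_gpu_gb, reserve_gb=3.0):
    """Naive even split of quantized weights across GPUs,
    reserving room per card for KV cache and overhead."""
    weights_gb = params_b * 1e9 * bpw / 8 / 1024**3
    per_gpu = weights_gb / n_gpus
    return per_gpu + reserve_gb <= vram_per_gpu_gb, round(per_gpu, 1)

# Llama 3.3 70B (~70.6B params) at ~4.85 bits/weight on 2x 24GB cards
print(split_fits(70.6, 4.85, 2, 24))
```

With these assumptions each card holds about 20 GB of weights, which is why 2x 3090/4090 works where a single 24GB card does not.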
- GGUF (Best for Beginners): The most flexible format. Works on CPU & GPU (perfect for offloading). Use this if you are unsure.
- EXL2 (Fastest): Exclusive to NVIDIA GPUs. It's incredibly fast but requires the model to fit 100% in VRAM. It crashes if you run out of memory.
- AWQ: An older GPU-focused format. Good for accuracy but less flexible than GGUF.
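To compare these formats by size, the main variable is the average bits per weight of the chosen quantization. A sketch using approximate GGUF averages (the exact bits-per-weight values vary slightly between model architectures):

```python
# Approximate average bits per weight for common GGUF quant levels (assumptions)
BITS_PER_WEIGHT = {"Q8_0": 8.5, "Q6_K": 6.56, "Q5_K_M": 5.69,
                   "Q4_K_M": 4.85, "Q3_K_M": 3.91}

def file_size_gb(params_b, quant):
    """Approximate on-disk / in-VRAM size of the quantized weights."""
    return params_b * 1e9 * BITS_PER_WEIGHT[quant] / 8 / 1024**3

# Size of a 22.2B model at each quant level
for q in BITS_PER_WEIGHT:
    print(q, round(file_size_gb(22.2, q), 2))
```

Dropping from Q8_0 to Q4_K_M roughly halves the footprint, which is why Q4 is the usual sweet spot on 16GB cards.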
GPU memory (VRAM) is ultra-fast (1,000+ GB/s). System RAM is much slower (~60 GB/s).
If the model needs 16GB and your GPU has 12GB, the remaining 4GB spills over into slow system RAM, creating a bottleneck.
The Fix: Use this calculator to pick a smaller quantization (e.g., Q4_K_M) or reduce the context window until the entire model fits in VRAM (Green Result).
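The "pick a smaller quantization" step can be sketched as a search from highest to lowest quality until the weights plus a cache/overhead reserve fit in VRAM. The bits-per-weight table is an approximation and the 2 GB reserve is an illustrative default, not the calculator's actual logic:

```python
# Approximate average bits per weight, ordered best quality first (assumptions)
QUANTS = [("Q8_0", 8.5), ("Q6_K", 6.56), ("Q5_K_M", 5.69),
          ("Q4_K_M", 4.85), ("Q3_K_M", 3.91)]

def best_fitting_quant(params_b, vram_gb, kv_and_overhead_gb=2.0):
    """Return the highest-quality quant whose weights + reserve fit in VRAM,
    or None if even the smallest quant spills into system RAM."""
    for name, bpw in QUANTS:
        weights_gb = params_b * 1e9 * bpw / 8 / 1024**3
        if weights_gb + kv_and_overhead_gb <= vram_gb:
            return name, round(weights_gb, 2)
    return None  # nothing fits: lower the context or accept CPU offloading

# A 22.2B model on a 16GB card
print(best_fitting_quant(22.2, 16.0))
```

With these numbers a 16GB card lands on Q4_K_M, matching the guide's recommendation above, while a 12GB card returns None.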
With 32GB of VRAM, the RTX 5090 can comfortably run models up to ~34B parameters (e.g., Gemma 2, Qwen 2.5) at full speed.
However, full 70B models (like Llama 3.3 70B) need roughly **46GB VRAM**, so on the 5090 they will always involve significant **CPU offloading**. For full-speed 70B inference, you need a 48GB professional card or a dual-GPU setup.
- 8k (Standard): Enough for most chats. Low VRAM usage.
- 32k (Long Docs): Great for summarizing PDFs. Uses ~2-4GB extra VRAM.
- 128k (Extreme): Uses massive amounts of VRAM (KV Cache).
Tip: If you run out of memory, lowering the context from 32k to 8k is often the easiest fix, and it does not change the model's intelligence.
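The context-window numbers above come from the KV cache growing linearly with context length. A minimal sketch, assuming an FP16 cache and an illustrative grouped-query-attention config (40 layers, 8 KV heads, head_dim 128, which are not any specific model's values):

```python
def kv_cache_gb(n_layers, n_kv_heads, head_dim, context, bytes_per_elem=2):
    """KV cache size: one K and one V tensor per layer, per token."""
    return 2 * n_layers * n_kv_heads * head_dim * context * bytes_per_elem / 1024**3

# Cache size at the three tiers above (illustrative config)
for ctx in (8192, 32768, 131072):
    print(ctx, round(kv_cache_gb(40, 8, 128, ctx), 2))
```

Going from 8k to 32k quadruples the cache, and 128k quadruples it again, which is why extreme contexts dominate VRAM even for mid-sized models.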