AI Hardware
26 GPUs & accelerators · self-host price guide · cloud rates
Edge / On-device · 5 hardware items
- Raspberry Pi 5 (8GB) · Budget king
  Other · 2023 · 17 GB/s memory BW
  8 GB RAM · $80 new · Fits: 3B models at 1-3 tok/s (Phi-3 mini, Gemma 3n E4B)
  CPU inference only, via llama.cpp. Fine for tiny models and learning.
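The "3B models at 1-3 tok/s" claim can be sanity-checked with a standard back-of-envelope rule: autoregressive decode is usually memory-bandwidth-bound, because every generated token streams the full quantized weight set from RAM. A minimal sketch (the ~0.5 bytes/param figure for Q4-style quantization is an assumed round number):

```python
def max_decode_tps(bw_gb_s: float, params_b: float,
                   bytes_per_param: float = 0.5) -> float:
    """Bandwidth-bound ceiling on decode tokens/s: one full pass over the
    quantized weights (params_b billion params) per generated token."""
    model_gb = params_b * bytes_per_param
    return bw_gb_s / model_gb

# Pi 5: 17 GB/s with a 3B model at ~Q4 gives an ~11 tok/s ceiling; the
# observed 1-3 tok/s means the CPU is compute-bound well before that limit.
print(round(max_decode_tps(17.0, 3.0), 1))  # 11.3
```

The same formula explains the rest of the table: the faster devices below mostly differ in memory bandwidth, not raw compute.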
- NVIDIA Jetson Orin Nano (8GB)
  NVIDIA · 2023 · 20 TFLOPS FP16 · 68 GB/s memory BW
  8 GB LPDDR5 (shared) · $249 new · Fits: 3-7B quantized (Gemma 3n E4B, Phi-4 int4)
  15W TDP. Best SBC option: CUDA support, so the same code runs as on desktop/server.
- Apple M4 (MacBook Air)
  Apple · 2025 · 120 GB/s memory BW
  16 GB LPDDR5X (unified) · $1,099 new · Fits: 7B-14B quantized models at good speed
  Unified memory is great for LLMs. Use llama.cpp's Metal backend.
- Apple M4 Pro (64GB) · Sweet spot
  Apple · 2025 · 273 GB/s memory BW
  64 GB LPDDR5X (unified) · $2,499 new · Fits: up to ~45B quantized (Qwen 2.5 Coder 32B, Gemma 3 27B)
  Best dev laptop for local AI: silent, with no cooling issues.
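The "fits up to ~45B quantized" figures come from a simple memory budget: quantized weights plus KV cache and runtime overhead must stay under total RAM. A rough sketch, where the ~0.55 bytes/param (Q4_K_M-style quantization) and the flat 4 GB overhead allowance are assumed round numbers, not measured values:

```python
def fit_check(params_b: float, mem_gb: float,
              bytes_per_param: float = 0.55, overhead_gb: float = 4.0):
    """Estimate a Q4-quantized model's footprint and whether it fits:
    weights plus a flat allowance for KV cache and runtime overhead."""
    need_gb = params_b * bytes_per_param + overhead_gb
    return round(need_gb, 1), need_gb <= mem_gb

# ~45B model in the M4 Pro's 64 GB: roughly 29 GB needed, fits with headroom.
print(fit_check(45, 64))
# 14B model in the M4 Air's 16 GB: roughly 12 GB needed, a tight but real fit.
print(fit_check(14, 16))
```

Note that fitting is necessary but not sufficient: a model that barely fits leaves little room for long contexts, and decode speed still follows the bandwidth rule above.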
- Apple M3 Ultra (192GB)
  Apple · 2025 · 800 GB/s memory BW
  192 GB LPDDR5 (unified) · $5,999 new · Fits: up to 235B MoE (Qwen 3 235B) or 70B dense models
  Mac Studio. Runs models on a single box that would need multi-GPU on NVIDIA.
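The M3 Ultra entry shows why a 235B MoE model is runnable at all: for mixture-of-experts models, the weights footprint scales with total parameters, but each decoded token only streams the active subset, so speed scales with those. A sketch combining both estimates (Qwen 3 235B activates roughly 22B params per token; the 0.5 bytes/param Q4 figure is an assumed round number):

```python
def moe_estimates(total_b: float, active_b: float, bw_gb_s: float,
                  bytes_per_param: float = 0.5):
    """MoE back-of-envelope: memory footprint scales with TOTAL params,
    bandwidth-bound decode speed with ACTIVE params per token."""
    mem_gb = total_b * bytes_per_param
    tps_ceiling = bw_gb_s / (active_b * bytes_per_param)
    return round(mem_gb, 1), round(tps_ceiling, 1)

# Qwen 3 235B (~22B active) on the M3 Ultra's 800 GB/s: ~118 GB of Q4
# weights (fits in 192 GB) with a ~73 tok/s bandwidth ceiling on decode.
print(moe_estimates(235, 22, 800))
```

A 70B dense model on the same box has a much lower ceiling (800 / 35 GB ≈ 23 tok/s), which is why the MoE option is the headline use case despite its far larger total size.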