AI Models
- DeepSeek V3.2
DeepSeek · 164K tokens · self-host
Best for: Long-context coding, upgraded V3 deployments
How: Drop-in upgrade from V3. Uses DeepSeek Sparse Attention (DSA) for better long-context performance.
Example: Feed your entire microservice codebase and get cross-service dependency analysis.
Benchmarks: HumanEval 94.0%
Tags: coding · math · sparse attention (DSA) · MIT license · improved context
Hardware to self-host: VRAM 350GB (quantized) · GPU 8× H100 80GB · RAM 512GB+ system
Note: same hardware footprint as V3 (671B parameters, with sparse attention added).
API: api.deepseek.com, or self-host via vLLM. Both expose the same OpenAI-compatible API.
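A minimal sketch of the long-context use case described above, calling the model through its OpenAI-compatible endpoint. The `gather_codebase` helper is illustrative (not part of any DeepSeek API), and the model name and base URL are assumptions to check against DeepSeek's current docs:

```python
# Sketch: feeding a codebase to DeepSeek V3.2 via the OpenAI-compatible API.
# The file-gathering helper below is a hypothetical utility, not a DeepSeek API.
from pathlib import Path

def gather_codebase(root: str, exts=(".py",), limit=100_000) -> str:
    """Concatenate source files under `root` into one prompt string,
    truncated to `limit` characters to stay inside the context window."""
    parts = []
    for path in sorted(Path(root).rglob("*")):
        if path.is_file() and path.suffix in exts:
            parts.append(f"# FILE: {path}\n{path.read_text(errors='ignore')}")
    return "\n\n".join(parts)[:limit]

# The actual request (requires the `openai` package and a valid API key;
# model name "deepseek-chat" and base URL are assumptions, verify in the docs):
# from openai import OpenAI
# client = OpenAI(base_url="https://api.deepseek.com", api_key="YOUR_KEY")
# resp = client.chat.completions.create(
#     model="deepseek-chat",
#     messages=[
#         {"role": "system", "content": "Analyze cross-service dependencies."},
#         {"role": "user", "content": gather_codebase("./services")},
#     ],
# )
# print(resp.choices[0].message.content)
```

For self-hosting, vLLM can serve the same OpenAI-compatible schema, so only `base_url` changes; the client code stays identical.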