AI Models

  • DeepSeek V3.2

    DeepSeek · 164K tokens · self-host

    Best for: Long-context coding, upgraded V3 deployments

    How: Drop-in upgrade from V3. Uses Dynamic Sparse Attention for better long-context performance.

    Example: Feed your entire microservice codebase and get cross-service dependency analysis.

    HumanEval 94.0%
    coding · math · sparse attention (DSA) · MIT license · improved context
    Hardware to self-host
    VRAM: 350GB (quantized)
    GPU: 8× H100 80GB
    RAM: 512GB+ system RAM

    Same hardware footprint as V3: the same 671B parameters, now with sparse attention.

    API: api.deepseek.com, or self-host via vLLM; both expose the same OpenAI-compatible API.
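
    Because both endpoints speak the OpenAI-compatible chat-completions protocol, switching between the hosted API and a local vLLM deployment only changes the base URL and API key. A minimal sketch of the shared request shape (the model identifier `deepseek-v3.2` is an assumption; check your deployment's `/v1/models` endpoint for the exact name it serves):

    ```python
    import json

    # Endpoint choices from the listing: hosted API or local vLLM server.
    # Both accept the same /chat/completions payload; only the base URL differs.
    HOSTED_BASE = "https://api.deepseek.com/v1"
    LOCAL_BASE = "http://localhost:8000/v1"  # vLLM's default OpenAI-compatible port

    def build_chat_request(model: str, prompt: str, max_tokens: int = 256) -> dict:
        """Build an OpenAI-compatible /chat/completions request payload."""
        return {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
            "max_tokens": max_tokens,
        }

    # Hypothetical model name for illustration.
    payload = build_chat_request("deepseek-v3.2", "Analyze cross-service dependencies.")
    print(json.dumps(payload, indent=2))
    ```

    With the official `openai` Python client, the same payload is sent by pointing `base_url` at either endpoint and calling `client.chat.completions.create(...)`, so client code written against the hosted API carries over to the self-hosted deployment unchanged.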