AI Models

52 models · 7 new in the last 60 days

  • Gemma 4 31B Dense (New · Open)

    Google · 256K tokens · self-host

    Best for: Self-hosted multimodal production, commercial use, multilingual apps

    How: Dense 31B — fits on a single A100 or 2x RTX 4090. Apache 2.0 = fully commercial. Supports images and video natively.

    Example: Deploy as a private multimodal assistant that reads screenshots, logs, and video clips.

    LMSYS Arena #3 (text) · MMLU ~82%
    multimodal · images + video · 35+ languages · Apache 2.0 · dense architecture
    Hardware to self-host
    VRAM: 20GB (quantized) / 62GB (FP16)
    GPU: 1× A100 80GB or 2× RTX 4090 24GB
    RAM: 32GB+ system RAM

    31B dense. Native multimodal (images + video) increases compute cost vs text-only.

    API: Ollama, vLLM, Hugging Face, Vertex AI. ollama run gemma4:31b

    Brand new (Apr 2026). Ranked #3 on LMSYS Arena text leaderboard at launch.
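
    The deployment example above can be sketched against Ollama's standard /api/chat endpoint (a minimal sketch, assuming a local Ollama serving the gemma4:31b tag from the card; the helper names are illustrative):

```python
# Sketch of the private multimodal assistant from the card, talking to a
# local Ollama instance serving gemma4:31b. Only the payload builder runs
# here; the HTTP call is defined but not invoked.
import base64
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default endpoint


def build_chat_payload(prompt: str, image_bytes: bytes,
                       model: str = "gemma4:31b") -> dict:
    """Ollama /api/chat payload with one base64-encoded image attached."""
    return {
        "model": model,
        "stream": False,
        "messages": [{
            "role": "user",
            "content": prompt,
            "images": [base64.b64encode(image_bytes).decode("ascii")],
        }],
    }


def ask(payload: dict) -> str:
    """POST the payload to Ollama and return the assistant's reply text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]


# Usage (requires a running Ollama with the model pulled):
#   reply = ask(build_chat_payload("What error is shown here?",
#                                  open("screenshot.png", "rb").read()))
```

    The same payload shape works for log snippets (as plain text content) or video frames passed as additional entries in the images list.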

  • Gemma 4 31B It NVFP4 Turbo (New · Open)

    LilaRest · self-host

    Best for: NVFP4-quantized self-hosting of Gemma 4 31B; trending on HuggingFace (247 likes this week)

    How: Available on Hugging Face. 74K downloads.

    Example: from transformers import AutoModelForCausalLM; model = AutoModelForCausalLM.from_pretrained("LilaRest/gemma-4-31B-it-NVFP4-turbo")

    transformers · safetensors · gemma4 · text-generation · gemma-4-31b-it

    API: huggingface.co/LilaRest/gemma-4-31B-it-NVFP4-turbo

    Auto-discovered from HuggingFace trending. 247 likes, 74K downloads.

  • Supergemma4 26b Uncensored Mlx 4bit V2 (New · Open)

    Jiunsong · self-host

    Best for: Running an uncensored Gemma 4 variant on Apple Silicon via MLX (4-bit); trending on HuggingFace (171 likes this week)

    How: Available on Hugging Face. 12K downloads.

    Example: from mlx_lm import load; model, tokenizer = load("Jiunsong/supergemma4-26b-uncensored-mlx-4bit-v2") (MLX weights load via mlx-lm, not transformers)

    mlx · safetensors · gemma4 · uncensored · apple-silicon

    API: huggingface.co/Jiunsong/supergemma4-26b-uncensored-mlx-4bit-v2

    Auto-discovered from HuggingFace trending. 171 likes, 12K downloads.

  • Gemma 4 E4B It OBLITERATED (New · Open)

    OBLITERATUS · self-host

    Best for: Uncensored ("abliterated") Gemma 4 E4B chat; trending on HuggingFace (282 likes this week)

    How: Available on Hugging Face.

    Example: from transformers import AutoModelForCausalLM; model = AutoModelForCausalLM.from_pretrained("OBLITERATUS/gemma-4-E4B-it-OBLITERATED")

    safetensors · gguf · gemma4 · abliterated · uncensored

    API: huggingface.co/OBLITERATUS/gemma-4-E4B-it-OBLITERATED

    Auto-discovered from HuggingFace trending. 282 likes, 7K downloads.

  • Supergemma4 26b Uncensored Gguf V2 (New · Open)

    Jiunsong · self-host

    Best for: Uncensored Gemma 4 in GGUF for llama.cpp; trending on HuggingFace (383 likes this week)

    How: Available on Hugging Face. 54K downloads.

    Example: llama-cli -hf Jiunsong/supergemma4-26b-uncensored-gguf-v2 (GGUF weights run via llama.cpp, not transformers)

    gguf · gemma4 · uncensored · fast · llama.cpp

    API: huggingface.co/Jiunsong/supergemma4-26b-uncensored-gguf-v2

    Auto-discovered from HuggingFace trending. 383 likes, 54K downloads.

  • GLM 5.1 (New · Open)

    zai-org · self-host

    Best for: General text generation and chat; trending on HuggingFace (1385 likes this week)

    How: Available on Hugging Face. 100K downloads.

    Example: from transformers import AutoModelForCausalLM; model = AutoModelForCausalLM.from_pretrained("zai-org/GLM-5.1")

    transformers · safetensors · glm_moe_dsa · text-generation · conversational

    API: huggingface.co/zai-org/GLM-5.1

    Auto-discovered from HuggingFace trending. 1385 likes, 100K downloads.

  • MiniMax M2.7 (New · Open)

    MiniMaxAI · self-host

    Best for: General text generation and chat; trending on HuggingFace (929 likes this week)

    How: Available on Hugging Face. 189K downloads.

    Example: from transformers import AutoModelForCausalLM; model = AutoModelForCausalLM.from_pretrained("MiniMaxAI/MiniMax-M2.7")

    transformers · safetensors · minimax_m2 · text-generation · conversational

    API: huggingface.co/MiniMaxAI/MiniMax-M2.7

    Auto-discovered from HuggingFace trending. 929 likes, 189K downloads.

  • DeepSeek V3.2 (Open)

    DeepSeek · 164K tokens · self-host

    Best for: Long-context coding, upgraded V3 deployments

    How: Drop-in upgrade from V3. Uses Dynamic Sparse Attention for better long-context performance.

    Example: Feed your entire microservice codebase and get cross-service dependency analysis.

    HumanEval 94.0%
    coding · math · sparse attention (DSA) · MIT license · improved context
    Hardware to self-host
    VRAM: 350GB (quantized)
    GPU: 8× H100 80GB
    RAM: 512GB+ system RAM

    Same hardware footprint as V3 — 671B with sparse attention.

    API: api.deepseek.com OR self-host via vLLM. Same OpenAI-compatible API.

  • Mistral Large 3 (Open)

    Mistral · 256K tokens · self-host

    Best for: European deployments, agent workflows, long-context multilingual apps

    How: Major upgrade from Large 2. MoE architecture with 41B active params. Same API, just change model ID.

    Example: Build a multi-tool agent that queries DBs, calls APIs, and generates reports in 30+ languages.

    MoE 41B active / 675B total · multilingual · function calling · 256K context
    Hardware to self-host
    VRAM: 350GB (quantized)
    GPU: 8× H100 80GB
    RAM: 512GB+ system RAM

    675B MoE (41B active). Datacenter class — most users go via api.mistral.ai.

    API: api.mistral.ai OR self-host via vLLM. OpenAI-compatible.
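
    The multi-tool agent example can be sketched with the OpenAI-compatible function-calling schema the card mentions (a minimal sketch: the query_db tool and the "mistral-large-3" model id are illustrative assumptions, not confirmed identifiers):

```python
# Sketch of the multi-tool agent wiring from the card. Mistral's API is
# OpenAI-compatible, so tools use the standard function-calling schema.
# The tool below is hypothetical; your agent runtime would implement it.
QUERY_DB_TOOL = {
    "type": "function",
    "function": {
        "name": "query_db",  # hypothetical tool name
        "description": "Run a read-only SQL query against the reporting database",
        "parameters": {
            "type": "object",
            "properties": {"sql": {"type": "string"}},
            "required": ["sql"],
        },
    },
}


def build_agent_request(user_msg: str, tools: list,
                        model: str = "mistral-large-3") -> dict:
    """Payload for POST /v1/chat/completions on api.mistral.ai (or vLLM)."""
    return {
        "model": model,  # illustrative model id; check Mistral's model list
        "messages": [{"role": "user", "content": user_msg}],
        "tools": tools,
        "tool_choice": "auto",  # let the model decide when to call a tool
    }
```

    When the model returns a tool call, your runtime executes query_db and sends the result back in a follow-up tool message; the same loop covers the API-calling and report-generating tools.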

  • Llama 4 Maverick (Open)

    Meta · 1M tokens · self-host

    Best for: Self-hosted production deployments, privacy-sensitive workloads

    How: ollama run llama4-maverick OR deploy on vLLM with tensor parallelism. Also available hosted on Together/Groq.

    Example: Deploy on 2x H100 GPUs behind your API gateway for private code review.

    MMLU 88.4% · HumanEval 84.8%
    multilingual · multimodal · MoE architecture · 17B active / 400B total
    Hardware to self-host
    VRAM: 200GB (quantized)
    GPU: 2× H100 80GB or 4× A100 80GB
    RAM: 256GB system RAM

    400B total params (17B active). FP16 needs ~800GB, FP8 ~400GB, INT4 ~200GB.

    API: Self-host via vLLM, Ollama, or use via Together, Fireworks, Groq

  • Llama 4 Scout (Open)

    Meta · 10M tokens · self-host

    Best for: Processing entire codebases, very long documents, single-GPU deployments

    How: Fits on a single H100. Best open model for extreme context lengths.

    Example: Feed your entire monorepo into context and ask about cross-service dependencies.

    MMLU 86.2%
    longest context (10M) · MoE 17B active / 109B total · fits single H100
    Hardware to self-host
    VRAM: 80GB
    GPU: 1× H100 80GB
    RAM: 128GB system RAM

    17B active params, fits in a single H100 at FP8.

    API: Same as Maverick — vLLM, Ollama, Together, Fireworks
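
    The monorepo example above can be sketched as a simple context packer (assumptions: a rough ~4 characters-per-token heuristic and an illustrative extension list; a real deployment should count tokens with the model's tokenizer):

```python
# Sketch of "feed your entire monorepo into context": concatenate source
# files with path headers into one prompt, budgeted against Scout's
# 10M-token window. Heuristic and extension list are illustrative.
from pathlib import Path


def pack_repo(root: str, max_tokens: int = 10_000_000,
              exts: tuple = (".py", ".go", ".ts")) -> str:
    """Pack source files under `root` into a single prompt string."""
    budget = max_tokens * 4  # approximate character budget (~4 chars/token)
    parts = []
    for path in sorted(Path(root).rglob("*")):
        if not path.is_file() or path.suffix not in exts:
            continue
        chunk = f"\n### {path.relative_to(root)}\n{path.read_text(errors='ignore')}"
        if len(chunk) > budget:
            break  # window full; stop before overflowing the context
        budget -= len(chunk)
        parts.append(chunk)
    return "".join(parts)


# The packed string goes in as the user message, followed by a question
# like "Which services depend on the billing client?".
```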

  • Qwen 3 235B (Open)

    Alibaba · 128K tokens · self-host

    Best for: Flexible thinking control, commercial self-hosting, multilingual

    How: Supports /think and /no_think tags to toggle reasoning on/off per request. Apache 2.0 = fully commercial.

    Example: Use /no_think for fast classification, /think for complex debugging — same model.

    AIME 2024 85.7% · HumanEval 90.2%
    hybrid thinking · MoE 22B active · Apache 2.0 · multilingual
    Hardware to self-host
    VRAM: 140GB (quantized)
    GPU: 4× A100 80GB or 2× H100
    RAM: 256GB+ system RAM

    235B total (22B active). MoE architecture — only 22B params active per forward pass.

    API: Self-host via vLLM/SGLang or use via Together, Fireworks. Also on Alibaba Cloud.
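
    The per-request /think and /no_think toggle from the card can be sketched as a small prompt tagger (endpoint and model id would come from your vLLM/SGLang deployment):

```python
# Sketch of Qwen 3's soft switch: appending /think or /no_think to the
# user prompt toggles chain-of-thought per request, per the card.
def tag_prompt(prompt: str, think: bool) -> str:
    """Append Qwen 3's per-request thinking toggle to a prompt."""
    return f"{prompt} {'/think' if think else '/no_think'}"


# Fast path: classification without reasoning tokens.
fast = tag_prompt("Label this ticket (bug/feature/question): 'crash on login'",
                  think=False)
# Slow path: full reasoning for complex debugging.
slow = tag_prompt("Why does this deadlock appear only under load?", think=True)
```

    Both strings go to the same model through the same API; only the tag changes, which is what makes the fast/slow split cheap to operate.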

  • Llama 3.3 70B (Open)

    Meta · 128K tokens · self-host

    Best for: Proven workhorse for self-hosted deployments, fine-tuning base

    How: ollama run llama3.3:70b. For production: vLLM on 2x A100 or 4x A10G.

    Example: Fine-tune on your internal docs for a private knowledge base chatbot.

    MMLU 86.0% · HumanEval 88.4%
    mature ecosystem · fine-tuning friendly · wide hardware support
    Hardware to self-host
    VRAM: 40GB (4-bit) / 140GB (FP16)
    GPU: 2× A100 80GB or 4× A10G 24GB
    RAM: 64GB+ system RAM

    70B dense. Widely supported — runs on Ollama with quantization on 48GB VRAM.

    API: Ollama, vLLM, TGI, or hosted (Together $0.60/M, Groq, Fireworks)

  • DeepSeek V3 (Open)

    DeepSeek · 128K tokens · self-host

    Best for: Cost-sensitive production APIs, coding tasks, math-heavy pipelines

    How: Cheapest top-tier API. OpenAI-compatible. Self-host needs 8x A100.

    Example: Replace GPT-4 in your CI pipeline for automated code review at 1/10th the cost.

    HumanEval 92.1% · MMLU 88.5%
    coding · math · MoE 37B active / 671B total · MIT license
    Hardware to self-host
    VRAM: 350GB (quantized) / 1.3TB (FP16)
    GPU: 8× H100 80GB or 8× A100 80GB
    RAM: 512GB+ system RAM

    671B total (37B active). Most users rent via API — self-hosting needs datacenter hardware.

    API: api.deepseek.com ($0.27/M in, $1.10/M out) OR self-host
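
    The listed prices make per-request costs easy to sanity-check (a back-of-envelope sketch using only the card's $0.27/M input and $1.10/M output rates; token counts are illustrative):

```python
# Cost estimator for the CI code-review use case, at the card's listed
# api.deepseek.com rates: $0.27 per 1M input tokens, $1.10 per 1M output.
def request_cost_usd(tokens_in: int, tokens_out: int,
                     in_per_m: float = 0.27, out_per_m: float = 1.10) -> float:
    """Estimated USD cost of one API call at the listed rates."""
    return tokens_in / 1e6 * in_per_m + tokens_out / 1e6 * out_per_m


# A typical CI review call: ~8K tokens of diff in, ~1K tokens of review out.
cost = request_cost_usd(8_000, 1_000)  # roughly a third of a cent per review
```

    At these rates even a busy pipeline running thousands of reviews a day stays in single-digit dollars, which is the arithmetic behind the "1/10th the cost" comparison above.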