AI Models
52 models · 0 new in 60d
- Nomic Embed Text v2-MoE · Open
Nomic AI · 8K tokens · self-host
Best for: Self-hosted RAG, privacy-first search, zero-cost embeddings
How: Self-host for zero cost. Comparable quality to OpenAI embeddings.
Example: Run alongside pgvector on the same server — full RAG pipeline with zero API costs.
Tags: MoE embedding · matryoshka · Apache 2.0 · self-hostable
Hardware to self-host: VRAM 2GB or CPU-only · GPU: any (runs on CPU at reasonable speed) · RAM: 4-8GB system RAM
Tiny MoE embedding model; CPU inference is fast enough for most use cases.
API: pip install nomic, or run via Ollama. Also hosted on Nomic Atlas.
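A minimal sketch of the self-hosted setup above, assuming a local Ollama server exposing the model (the model tag `nomic-embed-text` is an assumption — check which tag your Ollama install uses for the v2-MoE weights). The retrieval step is plain cosine similarity, so it works unchanged with pgvector or any other backend:

```python
import json
import math
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/embed"  # default local Ollama endpoint

def embed(texts, model="nomic-embed-text"):
    """Get embeddings from a local Ollama server — no API costs."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps({"model": model, "input": texts}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["embeddings"]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query_vec, doc_vecs, k=3):
    """Return indices of the k stored vectors most similar to the query."""
    scored = sorted(
        enumerate(doc_vecs),
        key=lambda iv: cosine(query_vec, iv[1]),
        reverse=True,
    )
    return [i for i, _ in scored[:k]]
```

In a real pipeline you would store the vectors in pgvector and let the database do the ranking; the in-process `top_k` here just makes the retrieval logic explicit.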
- text-embedding-3-large
OpenAI · 8K tokens · $0.13/M input · output N/A
Best for: RAG pipelines, semantic search, document retrieval
How: Set dimensions param to reduce size (e.g., 256 for fast search, 3072 for max quality).
Example: Index your internal docs and build a search API with pgvector + this model.
Tags: 3072 dimensions · strong retrieval · matryoshka support
API: api.openai.com — client.embeddings.create(model='text-embedding-3-large')
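The dimensions trick in the entry above works because the model is matryoshka-trained: truncating a full 3072-dim vector to its first N components and re-normalizing approximates asking the API for N dimensions directly. A sketch, assuming the standard openai Python client (`pip install openai`) with OPENAI_API_KEY set:

```python
import math

def embed(text, dims=256):
    """Request a reduced-size embedding directly from the API."""
    from openai import OpenAI  # imported lazily; requires `pip install openai`
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.embeddings.create(
        model="text-embedding-3-large",
        input=text,
        dimensions=dims,  # e.g. 256 for fast search, 3072 for max quality
    )
    return resp.data[0].embedding

def truncate_matryoshka(vec, dims):
    """Local equivalent: keep the first `dims` components, re-normalize to unit length."""
    head = vec[:dims]
    norm = math.sqrt(sum(x * x for x in head))
    return [x / norm for x in head]
```

Truncating locally lets you index once at 3072 dims and serve multiple precision/speed tiers from the same stored vectors, rather than re-embedding per tier.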