Explore Text Models
Discover AI language models for conversations, coding, and creative writing
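Every slug shown under a model name below is the identifier you pass as the `model` field in API requests. As a minimal sketch, assuming the platform exposes an OpenAI-compatible chat completions endpoint (the base URL and environment variable here are placeholders, not confirmed values):

```python
import os

from openai import OpenAI

# Hypothetical gateway: base URL and env var are placeholders for illustration.
client = OpenAI(
    base_url="https://api.example.com/v1",
    api_key=os.environ["EXAMPLE_API_KEY"],
)

# Any slug from this page drops into the `model` field unchanged.
response = client.chat.completions.create(
    model="google/gemini-3-flash-preview",
    messages=[{"role": "user", "content": "Explain MoE inference in two sentences."}],
)
print(response.choices[0].message.content)
```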
Gemini 3 Flash (Preview)
google/gemini-3-flash-preview
Google's Gemini 3 Flash preview model optimized for speed while maintaining high capability. Features sub-second response times with strong multimodal understanding and reasoning.
Gemini 3 Flash Thinking
google/gemini-3-flash-preview-thinking
Google's Gemini 3 Flash preview model with thinking mode enabled for enhanced reasoning and chain-of-thought capabilities.
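The thinking variant takes the same request shape; only the slug changes. A hedged sketch, assuming the gateway returns the model's reasoning in a separate `reasoning_content` field (that field name is borrowed from other reasoning-model APIs and is an assumption here):

```python
import os

from openai import OpenAI

client = OpenAI(base_url="https://api.example.com/v1",  # placeholder, as above
                api_key=os.environ["EXAMPLE_API_KEY"])

response = client.chat.completions.create(
    model="google/gemini-3-flash-preview-thinking",
    messages=[{"role": "user", "content": "How many Fridays does March 2025 have?"}],
)
msg = response.choices[0].message
# Field name is an assumption; some gateways fold the chain of thought into `content`.
print(getattr(msg, "reasoning_content", None))
print(msg.content)
```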
Mistral Small Creative
mistralai/mistral-small-creative
Mistral Small Creative is an experimental model designed for creative writing, narrative generation, roleplay and character-driven dialogue, general-purpose instruction following, and conversational agents.
Nvidia Nemotron 3 Nano 30B
nvidia/nemotron-3-nano-30b-a3b
Nvidia's latest Nemotron 3 Nano model with 30B total parameters (3B active), built on a hybrid Mamba-Transformer MoE architecture. Features excellent throughput and strong reasoning capabilities.
RNJ-1 Instruct 8B
essentialai/rnj-1-instruct
Essential AI's RNJ-1 Instruct 8B model. A capable instruction-following model optimized for general chat and task completion.
GLM 4.6 Derestricted v4
GLM-4.6-Derestricted-v4
Derestricted GLM 4.6 tuned for open-ended creative writing and roleplay with relaxed filters. Based on Z-AI's latest flagship foundation model.
GLM 4.5 Air Derestricted Iceblink ReExtract
GLM-4.5-Air-Derestricted-Iceblink-ReExtract
ReExtract variant of the Iceblink LoRA for GLM 4.5 Air with refined extractions for creative writing.
GLM 4.5 Air Derestricted Iceblink v2 ReExtract
GLM-4.5-Air-Derestricted-Iceblink-v2-ReExtract
ReExtract variant of the v2 Iceblink LoRA for GLM 4.5 Air with enhanced creative extractions.
GLM 4.5 Air Derestricted Steam ReExtract
GLM-4.5-Air-Derestricted-Steam-ReExtract
ReExtract variant of the Steam LoRA for GLM 4.5 Air with refined uncensored extractions for creative RP.
GPT 5.2 Chat
gpt-5.2-chat-latest
GPT-5.2 Chat is the latest chat-tuned variant in OpenAI's GPT-5 series. Builds on GPT-5.1 with improved multi-turn dialogue, better instruction following, and enhanced multimodal comprehension.
GPT 5.2
gpt-5.2
GPT-5.2 is OpenAI's latest flagship model, building on GPT-5.1 with improved reasoning, tool integration, and long-context performance. Delivers enterprise-grade reliability with predictable latency.
GPT 5.2 Pro
gpt-5.2-pro
The highest-performing version of GPT-5.2 from OpenAI. Optimized for complex reasoning, deep analysis, and demanding enterprise workloads requiring maximum capability.
Mixtral 8x7B
mistralai/mixtral-8x7b-instruct-v0.1
Mixtral 8x7B is a high-quality sparse Mixture of Experts (MoE) model with 46.7B total parameters but only 12.9B active per token. Excels at text generation, summarization, question answering, and code generation. Supports English, French, German, Spanish, and Italian. Apache 2.0 licensed.
Mixtral 8x22B
mistralai/mixtral-8x22b-instruct-v0.1
Mixtral 8x22B is a powerful sparse Mixture of Experts (MoE) model with 141B total parameters and 39B active per token. Features a 64K context window, exceptional math performance, and cost-efficient inference. Supports English, French, German, Spanish, and Italian. Apache 2.0 licensed.
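A quick way to read these MoE figures: per-token compute scales with active parameters, while the weight footprint scales with total parameters. A back-of-envelope sketch using the numbers quoted above (the 2-FLOPs-per-parameter rule of thumb is an approximation, not a vendor benchmark):

```python
# Rule of thumb: one forward pass costs ~2 FLOPs per *active* parameter per token.
models = {
    "mixtral-8x7b":  {"total_b": 46.7, "active_b": 12.9},
    "mixtral-8x22b": {"total_b": 141.0, "active_b": 39.0},
}

for name, p in models.items():
    gflops_per_token = 2 * p["active_b"]  # params in billions -> GFLOPs directly
    print(f"{name}: ~{gflops_per_token:.0f} GFLOPs/token, i.e. the compute cost of "
          f"a {p['active_b']}B dense model with the weight footprint of a "
          f"{p['total_b']}B one")
```

This is why sparse MoE inference is cost-efficient: Mixtral 8x22B answers with roughly the per-token compute of a 39B dense model while storing 141B parameters.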
GLM 4.6 Original
zai-org/glm-4.6-original
GLM-4.6, Zhipu's flagship text model with a 256K context window and advanced reasoning capabilities. Served directly via Z-AI (Zhipu).
GLM 4.6V
zai-org/glm-4.6v
GLM-4.6V scales its context window to 128K tokens during training and achieves SoTA visual-understanding performance among models of similar parameter scale. Integrates native function calling, bridging 'visual perception' and 'executable action' for multimodal agents. Quantized to FP8.
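Function calling on GLM-4.6V presumably follows the common OpenAI-style tools schema; a hedged sketch (the `crop_image` tool is invented purely for illustration, and the exact schema this endpoint accepts is an assumption):

```python
import os

from openai import OpenAI

client = OpenAI(base_url="https://api.example.com/v1",  # placeholder gateway
                api_key=os.environ["EXAMPLE_API_KEY"])

# Hypothetical tool definition, for illustration only.
tools = [{
    "type": "function",
    "function": {
        "name": "crop_image",
        "description": "Crop a region of interest out of the current image.",
        "parameters": {
            "type": "object",
            "properties": {
                "x": {"type": "integer"},
                "y": {"type": "integer"},
                "width": {"type": "integer"},
                "height": {"type": "integer"},
            },
            "required": ["x", "y", "width", "height"],
        },
    },
}]

response = client.chat.completions.create(
    model="zai-org/glm-4.6v",
    messages=[{"role": "user", "content": "Zoom in on the chart legend."}],
    tools=tools,
)
# If the model chose to act rather than answer, the call shows up here.
print(response.choices[0].message.tool_calls)
```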
DeepSeek V3.1 Nex N1
nex-agi/deepseek-v3.1-nex-n1
Nex-AGI's flagship 671B agentic model built on DeepSeek V3.1. Optimized for programming, tool use, web search, multi-hop reasoning, and mini-app development. Industry-leading results on SWE-bench Verified (70.6%), τ2-Bench (80.2%), and BFCL v4 (65.3%). Full production-ready agent capabilities with 128K context.
Devstral 2 123B
mistralai/devstral-2-123b-instruct-2512
Devstral 2 123B is a 123 billion parameter model from Mistral AI optimized for coding and development tasks. Features advanced reasoning capabilities for software engineering workflows.
GLM 4.6V Flash
zai-org/glm-4.6v-flash-original
GLM-4.6V-Flash (9B) is a lightweight model optimized for local deployment and low-latency applications. Scales its context window to 128K tokens and achieves SoTA visual-understanding performance among similar-scale models.
GLM 4.6V Original
zai-org/glm-4.6v-original
GLM-4.6V scales its context window to 128K tokens during training and achieves SoTA visual-understanding performance among models of similar parameter scale. Integrates native function calling, bridging 'visual perception' and 'executable action' for multimodal agents. Served directly via Z-AI (Zhipu).
Omega Directive 24B Unslop v2.0
ReadyArt/MS3.2-The-Omega-Directive-24B-Unslop-v2.0
ReadyArt's MS3.2 Omega Directive 24B unslopped mix tuned for rich roleplay and storytelling.
GPT 5.1 Codex Max
gpt-5.1-codex-max
GPT-5.1 Codex Max is tuned for deep refactors, large repos, and longer tool chains while keeping the full 5.1 reasoning upgrades.
Ministral 3B
mistralai/ministral-3b-2512
Ministral 3B is a tiny, efficient 3B parameter model from Mistral AI with vision capabilities, designed for edge deployment.
Ministral 8B
mistralai/ministral-8b-2512
Ministral 8B is an efficient 8B parameter model from Mistral AI with vision capabilities, designed for edge deployment.
Ministral 14B
mistralai/ministral-14b-2512
Ministral 14B is a powerful 14B parameter model from Mistral AI with vision capabilities, offering frontier performance in a compact size.
DeepSeek Math V2
deepseek-math-v2
DeepSeek's 685B parameter mathematical reasoning model with self-verification capabilities. Achieves gold-level scores on IMO 2025 and CMO 2024, plus 118/120 on Putnam 2024. Built on DeepSeek-V3.2-Exp-Base with generator-verifier architecture for rigorous theorem proving.
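The generator-verifier idea can also be emulated at the application layer; a hedged sketch of such a loop (the two-pass prompting below is our illustration of the pattern, not DeepSeek's internal architecture):

```python
def prove_with_verification(client, problem: str, max_rounds: int = 3) -> str:
    """Generate a proof, ask the same model to verify it, and retry on rejection."""
    proof = ""
    for _ in range(max_rounds):
        proof = client.chat.completions.create(
            model="deepseek-math-v2",
            messages=[{"role": "user", "content": f"Prove rigorously:\n{problem}"}],
        ).choices[0].message.content
        verdict = client.chat.completions.create(
            model="deepseek-math-v2",
            messages=[{"role": "user", "content":
                       f"Check this proof for gaps. Answer VALID or INVALID first.\n\n{proof}"}],
        ).choices[0].message.content
        if verdict.strip().upper().startswith("VALID"):
            break
    return proof
```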
Ministral 3 14B
mistralai/ministral-14b-instruct-2512
Ministral 3 14B is the balanced model in the Ministral 3 family: a powerful, efficient language model with vision capabilities, fine-tuned for instruction tasks and designed for edge deployment. Features multilingual support, strong system-prompt adherence, and native function calling. Apache 2.0 licensed.
Mistral Large 3 675B
mistralai/mistral-large-3-675b-instruct-2512
Mistral Large 3 675B is Mistral AI's flagship language model, featuring advanced RoPE scaling and Eagle speculative decoding. Delivers exceptional performance across reasoning, coding, and multilingual tasks.
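Speculative decoding, mentioned here, is simple to state: a small draft model proposes a short run of tokens and the large model verifies them, keeping the longest agreeing prefix. A toy greedy-acceptance sketch of the general idea (stub callables; Eagle's feature-level drafting and batched verification are considerably more involved):

```python
from typing import Callable, List

NextToken = Callable[[List[int]], int]  # a model reduced to a next-token function

def speculative_step(draft: NextToken, target: NextToken,
                     context: List[int], k: int = 4) -> List[int]:
    """Draft k tokens cheaply, then keep the prefix the target model agrees with."""
    proposal, ctx = [], list(context)
    for _ in range(k):
        t = draft(ctx)
        proposal.append(t)
        ctx.append(t)
    accepted, ctx = [], list(context)
    for t in proposal:
        t_target = target(ctx)  # real systems verify all k in one batched pass
        if t_target != t:       # disagreement: take the target's token and stop
            accepted.append(t_target)
            break
        accepted.append(t)
        ctx.append(t)
    return accepted
```

With a well-matched draft model most proposals are accepted, so several tokens come out per expensive target-model step.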
DeepSeek V3.2 Speciale
deepseek/deepseek-v3.2-speciale
DeepSeek V3.2 Speciale delivers maxed-out reasoning capabilities rivaling Gemini-3.0-Pro, with gold-medal performance on IMO, CMO, and the ICPC World Finals. Supports 128K max output with DeepSeek Sparse Attention. Quantized to FP8.
DeepSeek V3.2 Original
deepseek-v3.2-original
DeepSeek V3.2 (non-thinking mode) is the official successor to V3.2-Exp: a reasoning-first model built for agents with GPT-5-level performance, balancing inference cost against output length for everyday use. The first DeepSeek model with thinking-in-tool-use capability. ⚠️ Routed directly via the DeepSeek API; use our other DeepSeek options if data privacy is a concern.
