Models
Configure available LLM models and provider settings.
The Models section controls which LLM models are available in your workspace and how they're configured.
Provider configuration
Set up each LLM provider:
OpenRouter gives access to 100+ models from all major providers through a single API key.
API Key — set in Credentials
Base URL — https://openrouter.ai/api/v1 (default)
Default Model — the model used when agents don't specify one
Popular models via OpenRouter:
anthropic/claude-sonnet-4-20250514
google/gemini-2.5-pro
deepseek/deepseek-chat
openai/gpt-4
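As a sketch of how these settings fit together: OpenRouter exposes an OpenAI-compatible chat-completions endpoint under the Base URL above. The helper and the OPENROUTER_API_KEY environment-variable name below are assumptions for illustration, not part of this product's API.

```python
import json
import os

# Default Base URL from the provider configuration above.
OPENROUTER_BASE_URL = "https://openrouter.ai/api/v1"

def build_chat_request(model: str, prompt: str) -> dict:
    """Build the URL, headers, and JSON body for an OpenRouter chat call.

    Hypothetical helper; OPENROUTER_API_KEY is an assumed variable name —
    in this product the key is set in Credentials.
    """
    return {
        "url": f"{OPENROUTER_BASE_URL}/chat/completions",
        "headers": {
            "Authorization": f"Bearer {os.environ.get('OPENROUTER_API_KEY', '')}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

req = build_chat_request("anthropic/claude-sonnet-4-20250514", "Hello")
```

Because every model goes through the same endpoint, switching models is just a change to the `model` string.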
Direct OpenAI API access.
API Key — set in Credentials
Organisation ID — optional, for org-scoped billing
Models:
gpt-4
gpt-4-turbo
gpt-3.5-turbo
Direct Anthropic API access.
API Key — set in Credentials
Models:
claude-opus-4-20250514
claude-sonnet-4-20250514
claude-haiku-4-5-20251001
DeepSeek models for cost-effective tasks.
API Key — set in Credentials
Models:
deepseek-chat
deepseek-coder
Default model
The default model is used when:
An agent doesn't have a model explicitly set
New agents are created without specifying a model
System-level tasks (routing, classification) need an LLM
Set the default in General Settings.
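The resolution order above can be sketched as follows; the Agent shape and field names are assumptions for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Agent:
    """Hypothetical agent record; `model` of None means 'use the default'."""
    name: str
    model: Optional[str] = None

def resolve_model(agent: Agent, default_model: str) -> str:
    """Return the agent's explicit model, or fall back to the workspace default."""
    return agent.model or default_model

# An agent with no explicit model gets the default set in General Settings.
chosen = resolve_model(Agent("router"), "openai/gpt-4")
# An explicit assignment always wins over the default.
explicit = resolve_model(Agent("coder", model="deepseek/deepseek-coder"), "openai/gpt-4")
```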
Model fallback
Configure a fallback chain for resilience:
Primary — the agent's assigned model
Fallback 1 — used if primary is unavailable or rate-limited
Fallback 2 — last resort
Using OpenRouter as your primary provider gives you built-in fallback — OpenRouter routes to alternative models automatically if one is down.
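A minimal sketch of the chain logic, assuming a provider call that raises on outages or rate limits (the `call_model` callable here is a stand-in, not this product's API):

```python
def complete_with_fallback(prompt, models, call_model):
    """Try each model in order; return the first successful response.

    `call_model(model, prompt)` is a hypothetical provider call that
    raises on rate limits or outages.
    """
    last_error = None
    for model in models:
        try:
            return call_model(model, prompt)
        except Exception as err:  # rate-limited or unavailable: try the next model
            last_error = err
    raise RuntimeError(f"all models in the fallback chain failed: {last_error}")

# Stub that simulates the primary model being rate-limited.
def stub_call(model, prompt):
    if model == "primary-model":
        raise RuntimeError("429 rate limited")
    return f"{model}: ok"

result = complete_with_fallback("hi", ["primary-model", "fallback-model"], stub_call)
```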
Embedding model
The embedding model converts document chunks into vectors for semantic search:
Default: qwen/qwen3-embedding-8b via OpenRouter
Dimensions: 2048
Configurable in Settings
Changing the embedding model requires re-processing all documents. Existing embeddings become incompatible with the new model.
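A minimal sketch of why re-processing is required: vectors produced by different models are not comparable, so stored embeddings are typically guarded by model name and dimension. The function and names below are illustrative assumptions.

```python
EXPECTED_MODEL = "qwen/qwen3-embedding-8b"  # default embedding model
EXPECTED_DIM = 2048                          # its vector dimensions

def check_embedding(vector, model_name):
    """Reject a stored embedding produced by a different model or dimension.

    Hypothetical guard: a vector from another model is meaningless to
    compare against, even if it happens to have the same length.
    """
    if model_name != EXPECTED_MODEL:
        raise ValueError(
            f"embedding from {model_name} is incompatible; re-process the document"
        )
    if len(vector) != EXPECTED_DIM:
        raise ValueError(f"expected {EXPECTED_DIM} dimensions, got {len(vector)}")
    return True
```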