OpenClaw agents add: Complete Auth Setup for Anthropic, OpenAI & Ollama
The openclaw agents add command is where every OpenClaw setup begins. But auth configuration trips up most new users. This guide covers every auth scenario with exact CLI commands.
You installed OpenClaw. You read the docs. You ran openclaw agents add and hit an auth error. You are not alone. Model authentication is the single biggest friction point for new OpenClaw users, and the docs assume you already know which provider you want and how API keys work.
This guide walks through every auth scenario from scratch. Anthropic API keys for Claude models. OpenAI tokens for GPT. Ollama for fully local, zero-cost inference. Google Gemini for budget cloud models. By the end, you will have a working agent with verified authentication, regardless of which model provider you choose.
We will also cover multi-model setups (different models for different agents), cost optimization strategies, and every common error message with its fix. Whether you are running a single agent or a full multi-agent crew, auth configuration is the foundation that everything else depends on.
Basic Syntax: openclaw agents add
The openclaw agents add command creates a new agent in your OpenClaw instance. At its simplest, it takes an agent ID and walks you through an interactive setup wizard.
openclaw agents add <agent-id>
The agent ID is a lowercase, hyphenated string that uniquely identifies your agent. It becomes the directory name under ~/.openclaw/agents/ and the reference you use in all subsequent commands.
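The naming convention above can be checked before you run the command. This is a minimal sketch, assuming agent IDs must be lowercase alphanumerics separated by single hyphens; the exact rules OpenClaw enforces may differ, so treat the regex as an illustration.

```shell
# Sketch: validate an agent ID against the lowercase-hyphenated
# convention described above. The regex is an assumption, not
# OpenClaw's authoritative validation.
is_valid_agent_id() {
  printf '%s' "$1" | grep -Eq '^[a-z0-9]+(-[a-z0-9]+)*$'
}

is_valid_agent_id "content-writer" && echo "content-writer: ok"
is_valid_agent_id "PM_Orion" || echo "PM_Orion: invalid"
```

Running an invalid ID through a check like this is faster than waiting for the wizard to reject it.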
# Examples of valid agent IDs
openclaw agents add content-writer
openclaw agents add pm-orion
openclaw agents add slack-support-bot
Required and Optional Flags
The command supports several flags to skip the interactive wizard and configure the agent directly from the command line:
# Full syntax with all flags
openclaw agents add <agent-id> \
  --model <model-name> \
  --provider <provider> \
  --soul <path-to-soul.md> \
  --channel <channel> \
  --schedule <cron-expression>

# --model     e.g., claude-opus-4-6, gpt-5.2, qwen3.5
# --provider  anthropic, openai, ollama, google
# --soul      path to your SOUL.md file
# --channel   telegram, discord, slack, terminal
# --schedule  optional: cron schedule for automated runs
If you omit the --model flag, OpenClaw defaults to the model configured in your global settings. If no model is configured at all, the command will fail with an auth error. This is where most people get stuck.
What Happens When You Run It
When you run openclaw agents add, OpenClaw does the following:
1. Creates the agent directory at ~/.openclaw/agents/<agent-id>/
2. Generates a default SOUL.md if you did not provide one with --soul
3. Validates model auth by sending a test ping to the configured model provider
4. Registers the agent in the local agent registry
5. Starts the agent if the gateway is running
Step 3 is where auth errors surface. If you have not configured your model provider credentials, agent creation fails at validation. The fix is to run openclaw models auth paste-token before adding your first agent.
Auth Setup: Anthropic (Claude Models)
Anthropic's Claude models are the most popular choice for OpenClaw agents. Claude Opus 4.6 excels at complex reasoning and planning, making it ideal for PM and strategy agents. Haiku is fast and cheap for routine automation tasks.
Step 1: Get Your API Key
Go to console.anthropic.com and sign in. Navigate to API Keys in the left sidebar. Click "Create Key" and give it a descriptive name like "openclaw-production". Copy the key immediately. Anthropic only shows it once.
Your key will look like: sk-ant-api03-xxxxxxxxxxxx. Keep it secure. Do not commit it to Git, do not paste it in public channels, and do not share it with anyone.
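One practical way to honor the "do not commit it to Git" rule is a secret scan over your working tree before each commit. The sketch below is a hypothetical check, not part of OpenClaw; it matches the "sk-ant-" prefix shown above and is demonstrated on a throwaway directory.

```shell
# Sketch of a pre-commit secret scan (hypothetical helper, not an
# OpenClaw feature): flag Anthropic-style "sk-ant-" keys before
# they reach Git history.
scan_for_keys() {
  # grep returns 0 on a match, so invert: a hit means "unsafe"
  grep -REn 'sk-ant-[A-Za-z0-9_-]{8,}' "$1" && return 1 || return 0
}

mkdir -p /tmp/keyscan-demo
printf 'ANTHROPIC_API_KEY=sk-ant-api03-aaaabbbbcccc\n' > /tmp/keyscan-demo/.env
scan_for_keys /tmp/keyscan-demo || echo "potential key found -- do not commit"
```

Wiring a check like this into a Git pre-commit hook catches the most common leak path: a .env file staged by accident.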
Step 2: Configure OpenClaw Auth
Run the auth command and paste your API key when prompted:
openclaw models auth paste-token
# When prompted:
#   Provider: anthropic
#   Token: sk-ant-api03-xxxxxxxxxxxx
OpenClaw stores the token securely in ~/.openclaw/config/auth.json. The file is permission-locked to your user account.
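If you want to confirm that permission lock yourself, a quick owner-only check looks like this. The ~/.openclaw/config/auth.json path is taken from the guide; the snippet operates on a stand-in file so it is safe to run anywhere.

```shell
# Sketch: verify a credentials file is readable only by its owner,
# tightening it if needed. Demonstrated on a stand-in file; point
# AUTH_FILE at ~/.openclaw/config/auth.json on a real install.
AUTH_FILE=/tmp/demo-auth.json
printf '{}\n' > "$AUTH_FILE"
chmod 600 "$AUTH_FILE"

perms=$(ls -l "$AUTH_FILE" | cut -c1-10)
[ "$perms" = "-rw-------" ] && echo "auth file locked down"
```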
Step 3: Verify
Confirm your auth is working by listing available models:
openclaw models list
# Expected output:
# anthropic/claude-opus-4-6    ✓ authenticated
# anthropic/claude-sonnet-4    ✓ authenticated
# anthropic/claude-haiku       ✓ authenticated
# anthropic/claude-4.5-sonnet  ✓ authenticated
If you see green checkmarks next to the Anthropic models, auth is configured correctly. Now you can add an agent using any Claude model:
openclaw agents add pm-agent --model claude-opus-4-6 --provider anthropic
Supported Anthropic Models
- Claude 4.5 Sonnet — balanced performance and cost, good default choice
- Claude Opus 4.6 — highest capability, best for complex reasoning and planning agents
- Claude Haiku — fastest and cheapest, ideal for simple automation and high-volume tasks
- Claude Sonnet 4 — strong reasoning at moderate cost, great for coding agents
Auth Setup: OpenAI (GPT Models)
OpenAI's GPT models are another popular choice, especially GPT-5.2 for advanced tasks and GPT-4o for cost-effective general use. The auth process is similar to Anthropic.
Step 1: Get Your API Key
Go to platform.openai.com and navigate to API Keys. Click "Create new secret key", name it, and copy it. Like Anthropic, OpenAI only shows the key once.
Your key will look like: sk-proj-xxxxxxxxxxxx. Make sure your OpenAI account has billing enabled and sufficient credits. API key creation works without billing, but actual API calls will fail without a funded account.
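Before pasting a key into OpenClaw, a quick sanity check can save a round-trip through an auth error. The checks below are heuristics based on the "sk-proj-" prefix shown above, not an official format guarantee from OpenAI.

```shell
# Sketch: heuristic sanity check for a pasted OpenAI key.
# Prefix and whitespace checks only -- not an official validator.
check_openai_key() {
  key=$1
  case "$key" in
    sk-*) ;;  # expected prefix family
    *) echo "warning: key does not start with sk-"; return 1 ;;
  esac
  if printf '%s' "$key" | grep -q '[[:space:]]'; then
    echo "warning: key contains whitespace"; return 1
  fi
  echo "key looks plausible"
}

check_openai_key "sk-proj-xxxxxxxxxxxx"
```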
Step 2: Configure OpenClaw Auth
openclaw models auth paste-token
# When prompted:
#   Provider: openai
#   Token: sk-proj-xxxxxxxxxxxx
Step 3: Verify and Add Agent
openclaw models list
# Expected output:
# openai/gpt-5.2  ✓ authenticated
# openai/gpt-4o   ✓ authenticated
# openai/o3       ✓ authenticated

# Add an agent with GPT
openclaw agents add writer-agent --model gpt-5.2 --provider openai
Supported OpenAI Models
- GPT-5.2 — latest flagship model, strongest performance across all tasks
- GPT-4o — fast and affordable multimodal model, good for general-purpose agents
- o3 — reasoning-focused model, excels at math, logic, and code generation
Auth Setup: Ollama (Local/Free)
Ollama lets you run open-source models entirely on your own hardware. No API keys. No usage fees. No data leaves your machine. This is the best option for privacy-conscious setups, development environments, and anyone who wants to run agents at zero marginal cost.
Step 1: Install Ollama
Download and install Ollama from ollama.com. It is available for macOS, Linux, and Windows.
# macOS / Linux
curl -fsSL https://ollama.com/install.sh | sh

# Verify installation
ollama --version
Step 2: Pull a Model
Ollama hosts hundreds of open-source models. Pull the one you want to use. The download size varies from 2 GB (small models) to 40+ GB (large models).
# Pull popular models
ollama pull qwen3.5      # Alibaba's Qwen, great for general tasks
ollama pull llama3.3     # Meta's Llama, strong reasoning
ollama pull mistral      # Mistral AI, fast and efficient
ollama pull deepseek-r1  # DeepSeek, excellent for coding

# Verify the model is available
ollama list
Step 3: Start the Ollama Server
Ollama runs a local API server on port 11434. Make sure it is running before you configure OpenClaw:
# Start Ollama server (runs in background)
ollama serve

# Test that it's running
curl http://localhost:11434/api/tags
Step 4: Point OpenClaw to Ollama
Unlike cloud providers, Ollama does not need an API key. You just tell OpenClaw where the Ollama server is running:
openclaw models auth paste-token
# When prompted:
#   Provider: ollama
#   Endpoint: http://localhost:11434
#   Token: (leave empty, press Enter)
Verify and add your agent:
openclaw models list
# Expected output:
# ollama/qwen3.5   ✓ connected
# ollama/llama3.3  ✓ connected
# ollama/mistral   ✓ connected

# Add an agent with a local model
openclaw agents add local-writer --model qwen3.5 --provider ollama
The key advantage here is cost. Once you have the hardware, every inference call is free. No metered API billing, no rate limits, no surprise invoices at the end of the month. For development and testing, Ollama is unbeatable.
Auth Setup: Google Gemini
Google Gemini offers competitive models at aggressive pricing. Gemini Flash is one of the cheapest cloud inference options available, making it a solid choice for high-volume agents where cost matters more than peak performance.
Step 1: Get Your API Key
Go to aistudio.google.com and sign in with your Google account. Click "Get API Key" in the left sidebar. Create a key for a new or existing Google Cloud project. Copy the key.
Your key will look like: AIzaSyXXXXXXXXXXXXXXXXXXXXX. Google offers a generous free tier for Gemini API, so you can test without billing enabled.
Step 2: Configure and Verify
openclaw models auth paste-token
# When prompted:
#   Provider: google
#   Token: AIzaSyXXXXXXXXXXXXXXXXXXXXX

openclaw models list
# Expected output:
# google/gemini-2.5-pro    ✓ authenticated
# google/gemini-2.5-flash  ✓ authenticated

# Add an agent with Gemini
openclaw agents add research-agent --model gemini-2.5-pro --provider google
Supported Google Models
- Gemini 2.5 Pro — strong reasoning and long context, competitive with Claude and GPT
- Gemini 2.5 Flash — extremely fast and cheap, great for high-volume tasks
Multi-Model Setup: Different Models for Different Agents
One of OpenClaw's most powerful features is running multiple agents with different models simultaneously. This lets you optimize for both performance and cost across your agent crew.
Strategy: Match Model to Task
Not every agent needs the most expensive model. A PM agent making strategic decisions benefits from Claude Opus 4.6. A content writer that produces drafts works fine with GPT-4o. A monitoring agent that checks logs and sends alerts can run on Ollama locally for free.
# Configure auth for multiple providers
openclaw models auth paste-token   # Anthropic key
openclaw models auth paste-token   # OpenAI key
openclaw models auth paste-token   # Ollama endpoint

# Create agents with different models
openclaw agents add pm-orion \
  --model claude-opus-4-6 \
  --provider anthropic

openclaw agents add writer-echo \
  --model gpt-4o \
  --provider openai

openclaw agents add monitor-radar \
  --model qwen3.5 \
  --provider ollama
Cost Optimization Example
Here is a real cost comparison for a three-agent crew running 24/7 with moderate usage (approximately 100K tokens per agent per day):
| Agent | Model | Daily Cost | Monthly Cost |
|---|---|---|---|
| PM (Orion) | Claude Opus 4.6 | ~$3.00 | ~$90 |
| Writer (Echo) | GPT-4o | ~$0.50 | ~$15 |
| Monitor (Radar) | Ollama / Qwen 3.5 | $0 | $0 |
Total monthly cost: roughly $105. If you ran all three agents on Claude Opus 4.6, it would cost $270+. Using the right model for the right task cuts your bill by more than half without sacrificing quality where it matters.
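The table's arithmetic is easy to reproduce. This sketch uses the guide's daily estimates and assumes 30-day months; swap in your own per-agent figures to model a different crew.

```shell
# Sketch: reproduce the cost comparison above.
# Daily figures are the guide's estimates; 30-day months assumed.
awk 'BEGIN {
  opus   = 3.00 * 30   # PM agent on Claude Opus 4.6
  gpt    = 0.50 * 30   # writer agent on GPT-4o
  local  = 0.00 * 30   # monitor agent on local Ollama
  mixed    = opus + gpt + local
  all_opus = 3 * opus
  printf "mixed crew:    $%.0f/month\n", mixed
  printf "all-Opus crew: $%.0f/month\n", all_opus
  printf "savings:       $%.0f/month\n", all_opus - mixed
}'
```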
Fallback Configuration
OpenClaw supports model fallbacks. If your primary model provider goes down or hits a rate limit, the agent automatically switches to a backup model. Configure this in the agent's SOUL.md:
# In SOUL.md model configuration
model:
  primary: claude-opus-4-6
  fallback:
    - gpt-5.2
    - qwen3.5

The agent tries models in order. If Anthropic is down, it uses GPT-5.2. If OpenAI is also down, it falls back to local Ollama. Your agent stays online even when cloud providers have outages.
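The cascade behaves like a simple first-success loop. This is a conceptual sketch of that order, not OpenClaw's actual implementation; try_model is a mock that simulates an Anthropic outage so the fallback is visible.

```shell
# Conceptual sketch of the fallback cascade described above.
# try_model is a mock: it "fails" for the primary provider.
try_model() {
  case "$1" in
    claude-opus-4-6) return 1 ;;  # simulate an Anthropic outage
    *) return 0 ;;
  esac
}

for model in claude-opus-4-6 gpt-5.2 qwen3.5; do
  if try_model "$model"; then
    echo "using $model"
    break
  fi
  echo "$model unavailable, trying next"
done
```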
Common Errors and Fixes
These are the five most common errors you will see when running openclaw agents add, along with the exact fix for each one.
Error: "Auth not configured"
Error: Auth not configured for provider 'anthropic'. Run 'openclaw models auth paste-token' to set up authentication.
Cause: You tried to add an agent before configuring model auth. OpenClaw validates auth during agent creation.
Fix: Run openclaw models auth paste-token, select the provider, and paste your API key. Then retry openclaw agents add.
Error: "Model not found"
Error: Model 'claude-opus' not found. Available models: claude-opus-4-6, claude-sonnet-4, claude-haiku
Cause: You used the wrong model name. OpenClaw requires the exact model identifier, not a shortened alias.
Fix: Run openclaw models list to see exact model names. Use the full identifier like claude-opus-4-6 instead of claude-opus.
Error: "Connection refused"
Error: Connection refused at http://localhost:11434
Could not reach Ollama server.
Cause: You configured Ollama as your provider, but the Ollama server is not running.
Fix: Start the Ollama server with ollama serve. Verify it is running by hitting curl http://localhost:11434/api/tags. If you are running Ollama on a different machine, update the endpoint in openclaw models auth.
Error: "Rate limit exceeded"
Error: Rate limit exceeded. Too many requests to anthropic API. Retry after 60 seconds.
Cause: You are running too many concurrent agents on the same API key, or your API tier has a low rate limit.
Fix: Reduce the number of concurrent agents, upgrade your API tier with the provider, or spread agents across multiple providers. You can also configure rate limiting in the OpenClaw gateway to stagger requests.
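If you script around the CLI yourself, staggering requests usually means exponential backoff. The sketch below shows the pattern with a stand-in command that fails twice before succeeding; replace flaky_call with your real API call. This is an illustration of the technique, not OpenClaw's built-in gateway rate limiting.

```shell
# Sketch of client-side exponential backoff for rate-limit errors.
# flaky_call is a stand-in that fails twice, then succeeds.
attempts=0
flaky_call() {
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]   # succeed on the third try
}

delay=1
for try in 1 2 3 4 5; do
  if flaky_call; then
    echo "succeeded on attempt $try"
    break
  fi
  echo "rate limited, retrying in ${delay}s"
  sleep "$delay"
  delay=$((delay * 2))    # 1s, 2s, 4s, ...
done
```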
Error: "Invalid API key"
Error: Invalid API key for provider 'openai'. The provided key was rejected by the API.
Cause: Your API key is expired, revoked, or incorrectly pasted. Extra whitespace or a truncated key will also trigger this error.
Fix: Generate a new API key from the provider's dashboard. Run openclaw models auth paste-token again with the fresh key. Make sure you copy the entire key without leading or trailing spaces.
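Stray whitespace is the easiest of these causes to eliminate mechanically. A one-liner like this strips leading, trailing, and embedded whitespace before you paste the key (the sk-proj-xxxxxxxxxxxx value is the guide's placeholder, not a real key):

```shell
# Sketch: strip the stray whitespace a copy-paste often adds
# before handing the key to the auth prompt.
raw_key='  sk-proj-xxxxxxxxxxxx
'
clean_key=$(printf '%s' "$raw_key" | tr -d '[:space:]')
printf 'cleaned: [%s]\n' "$clean_key"
```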
Skip the Auth Hassle
If all of this feels like too much manual configuration, there is a faster way. CrewClaw handles all of this for you. Pick a role, choose your model provider, and download a ready-to-deploy agent package with auth, SOUL.md, Docker config, and gateway setup already configured.
$9 one-time. No subscription. No recurring fees. Your agent package includes pre-configured auth templates for every provider covered in this guide. Just paste your API key into the generated .env file and run docker compose up.
Frequently Asked Questions
Can I use different models for different agents?
Yes. Each agent in OpenClaw can use a different model provider. You configure auth per-model using openclaw models auth, and then assign models to agents in their SOUL.md file or during openclaw agents add setup. For example, your PM agent can use Claude Opus 4.6 while your content writer uses GPT-5.2 and your summarizer runs on local Ollama.
Is Ollama good enough for production agents?
It depends on the task. For simple automation, summarization, and internal tools, Ollama models like Qwen 3.5 and Llama perform well. For complex reasoning, multi-step planning, and customer-facing agents, cloud models like Claude or GPT still outperform local alternatives. The advantage of Ollama is zero API cost and full data privacy. Many teams use Ollama for development and testing, then switch to a cloud model for production.
How do I update an agent's auth after creation?
Run openclaw models auth paste-token again with the new key for the provider you want to update. OpenClaw stores auth tokens globally, so updating the token applies to all agents using that provider. If you want to change which model an agent uses, edit the model field in the agent's SOUL.md file and restart the agent with openclaw agent restart.
Can I use my own fine-tuned model?
Yes, if your fine-tuned model is hosted on a provider that exposes an OpenAI-compatible API. Set the base URL and API key using openclaw models auth, then specify your fine-tuned model name in the agent config. This works with fine-tuned models on OpenAI, Azure OpenAI, Fireworks, Together AI, and any provider with an OpenAI-compatible endpoint. For locally fine-tuned models, serve them through Ollama or vLLM and point OpenClaw to that endpoint.
What is the cheapest model setup for OpenClaw?
The cheapest setup is Ollama running locally. There are zero API costs because everything runs on your hardware. Install Ollama, pull a model like qwen3.5 or llama, and configure OpenClaw to use the local endpoint. You pay nothing beyond electricity. For cloud models, Anthropic Haiku and Google Gemini Flash are the cheapest options, typically under $0.50 per million tokens. CrewClaw generates your full agent config for $9 one-time, so you skip the manual setup entirely.
Deploy a Ready-Made AI Agent
Skip the setup. Pick a template and deploy in 60 seconds.