Multi-Agent · OpenClaw · March 5, 2026 · 11 min read

OpenClaw Multi-Agent Setup: Build Agent Teams That Work Together

A single OpenClaw agent handles tasks well. A coordinated team of agents handles entire workflows — research, writing, review, and distribution — without you touching a keyboard. This guide walks through every layer of OpenClaw multi-agent setup: agents.md structure, handoff rules, shared memory, real pipeline examples, and what to avoid.

Why Multi-Agent? When a Single Agent Is Not Enough

A single agent works well when the task is contained: summarize this document, write this email, answer this question. The moment a workflow has multiple steps with different skill requirements, the single-agent model breaks down.

Consider a blog post pipeline. Step one is keyword research — requires data analysis and SERP understanding. Step two is writing — requires long-form prose and structure. Step three is SEO review — requires technical knowledge of meta tags and search intent. Step four is social distribution — requires short-form copywriting for different platforms. Asking one agent to do all four well means it does all four poorly.

Specialization — each agent masters one role instead of spreading thin across everything.

Parallel Execution — multiple agents work simultaneously; research, writing, and review overlap.

Built-in QA — reviewer agents catch errors before output leaves the pipeline.

Multi-agent setups are worth the configuration overhead when you have a workflow that repeats, has three or more distinct steps, or requires different reasoning styles at different stages. If you only run a task once per month, a single agent with a detailed prompt is probably enough.

The agents.md File: Your Team's Org Chart

Every multi-agent OpenClaw workspace needs an agents.md file in the root directory. This file is the team-level configuration — it tells the gateway which agents are active, what their relationships are, and what rules govern the whole team. Each individual agent reads its own SOUL.md for personal instructions, but all agents read agents.md for team context.

agents.md — team root configuration
# Team: Content Operations

## Mission
Produce three SEO-optimized blog posts per week.
Each post must have keyword research, a written draft,
an SEO review pass, and social media distribution posts.

## Active Agents
- @Orion  — PM and coordinator. Owns the weekly plan.
- @Scout  — Researcher. Keyword analysis, competitor research, data gathering.
- @Echo   — Writer. Long-form blog posts, email copy.
- @Radar  — SEO Analyst. Optimization, meta descriptions, internal links.
- @Pulse  — Social Media. Twitter threads, LinkedIn posts, newsletter blurbs.

## Task Flow
Orion → Scout (research brief) → Echo (draft) → Radar (SEO) → Pulse (social)

## Team Rules
- All responses in English only
- @Orion approves all final content before publish
- No agent publishes without @Radar SEO sign-off
- All research citations must be included in drafts

## Shared Context
- Target audience: developers and technical founders
- Brand voice: direct, no fluff, practical examples
- Publish platform: company blog (Next.js + MDX)

The agents.md file is injected into every agent's context by the gateway. This means @Radar knows that @Echo is on the team and what @Echo's role is, even if they have never exchanged messages. It eliminates the need to re-explain team structure in every prompt.
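Pulling the file paths from this guide together, a minimal workspace might be laid out as follows. This is an inferred sketch, not an official layout — the directory names come from the examples in this article, so adjust them to match your actual install:

```shell
# Sketch of a minimal multi-agent workspace, inferred from the paths
# used in this guide: agents.md and MEMORY.md in the workspace root,
# one SOUL.md per agent under agents/<name>/.
mkdir -p workspace/agents/orion workspace/agents/echo workspace/agents/lens
touch workspace/agents.md workspace/MEMORY.md
touch workspace/agents/orion/SOUL.md workspace/agents/echo/SOUL.md workspace/agents/lens/SOUL.md

find workspace -type f | sort
# workspace/MEMORY.md
# workspace/agents.md
# workspace/agents/echo/SOUL.md
# workspace/agents/lens/SOUL.md
# workspace/agents/orion/SOUL.md
```

The key point is the split: files in the root are team-wide context that every agent reads; files under an agent's own directory apply to that agent alone.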

Step-by-Step: Building a 3-Agent Team (PM, Writer, Reviewer)

The simplest useful multi-agent setup is three agents: a coordinator, a producer, and a reviewer. Here is how to build it from scratch.

Step 1 — Create the agents

Terminal
openclaw agents add orion    # PM / Coordinator
openclaw agents add echo     # Content Writer
openclaw agents add lens     # Content Reviewer

openclaw agents list
# orion   active
# echo    active
# lens    active

Step 2 — Write agents.md

agents.md
# 3-Agent Writing Team

## Agents
- @Orion — PM. Receives task requests. Assigns topics to @Echo.
- @Echo  — Writer. Writes first drafts. Sends to @Lens for review.
- @Lens  — Reviewer. Reviews drafts for clarity, accuracy, tone.
           Returns feedback to @Echo or approves for publish.

## Flow
Orion (assign) → Echo (write) → Lens (review) → Echo (revise) → Done

## Rules
- @Echo must not publish without @Lens approval
- @Lens must provide specific, actionable feedback (not just "good" or "bad")
- @Orion tracks status: assigned / in-progress / in-review / approved

Step 3 — Define handoffs in each SOUL.md

agents/echo/SOUL.md — Writer handoffs
# Echo

## Role
Content writer. You write blog posts, guides, and landing page copy.

## Handoffs
- WHEN @Orion assigns a topic: acknowledge it, write a 1,500-word draft
- AFTER draft is complete: send full draft to @Lens with message:
  "@Lens please review this draft: [paste draft]"
- WHEN @Lens returns feedback: revise and re-send for final approval
- WHEN @Lens approves: notify @Orion the post is ready to publish

Step 4 — Start the gateway and trigger the workflow

Terminal
openclaw gateway start

openclaw agent --agent orion \
  --message "Assign Echo a blog post: 'How to deploy an AI agent on Raspberry Pi'"

# Orion sends task to Echo
# Echo writes draft, sends to Lens
# Lens reviews, returns feedback or approves
# Echo revises if needed, notifies Orion when approved

Agent Communication: @Mention Handoffs and Shared Memory

OpenClaw agents communicate through two mechanisms: @mention handoffs for direct message passing, and shared MEMORY.md files for persistent context.

When an agent includes @AgentName in its response, the gateway routes that message to the named agent. The receiving agent gets the full message content plus the team context from agents.md. This is how pipelines move work forward — each agent completes its step and hands off to the next.

Example — Echo handing off to Radar
# Echo's response after completing draft:

Draft complete. Word count: 1,847. Topic: Raspberry Pi agent deployment.

Sections covered:
- Hardware requirements (Pi 4, 8GB RAM)
- OpenClaw installation steps
- SOUL.md configuration
- Gateway setup and autostart
- Telegram integration

@Radar please review this draft for SEO. Target keyword:
"openclaw raspberry pi". Check: meta description, header
structure, internal link opportunities, and keyword density.

[Full draft pasted below]

For persistent shared context — facts that need to survive across multiple sessions — agents write to a shared MEMORY.md file. For example, a researcher agent stores competitor findings in MEMORY.md and a writer agent reads them when starting a new post, without needing to re-run the research.

shared MEMORY.md — written by Scout, read by Echo
# Shared Team Memory

## Competitor Analysis (Updated: 2026-03-05)
- crewai.com: targets "multi-agent framework", DR 68
- autogpt.net: targets "autonomous agent", DR 54
- langchain.com: targets "llm orchestration", DR 72

## High-Value Keyword Opportunities
- "openclaw multi agent setup" — low competition, 880 monthly
- "openclaw agents.md" — near zero competition, high intent
- "run multiple ai agents" — medium competition, 2.4K monthly

## Brand Voice Notes
- Never say "leverage" or "unlock"
- Use real numbers over vague claims
- Lead with the problem, not the solution
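Updating shared memory can be as simple as appending a timestamped section, so readers can tell fresh data from stale. A sketch — the section format and the sample entry are illustrative, not a required schema:

```shell
# Append a timestamped finding to shared team memory. Other agents read
# the file and use the Updated date to judge freshness. The section name
# and the keyword entry below are placeholder data for illustration.
cat >> MEMORY.md <<EOF

## Keyword Check (Updated: $(date +%F))
- placeholder finding recorded by the research agent
EOF

grep -q 'Keyword Check' MEMORY.md && echo "memory updated"
```

Appending rather than overwriting keeps the history intact; a periodic cleanup pass (by the PM agent, for example) can prune sections whose Updated date is too old.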

Real Example: SEO Content Pipeline (Researcher → Writer → Editor)

This is the exact pipeline used to produce SEO blog content for CrewClaw. Three agents, one clear flow, consistent weekly output with no manual intervention on routine posts.

@Scout (Researcher)

Input: Topic + target keyword

Output: Research brief: SERP analysis, top 5 competitor angles, data points, internal link candidates

Hands to: @Echo

@Echo (Writer)

Input: Research brief from Scout

Output: 1,800-word draft: intro, 6 sections, code examples, CTA

Hands to: @Radar

@Radar (SEO Editor)

Input: Draft from Echo

Output: Optimized draft: revised H2s, meta description, internal links, keyword density check

Hands to: @Orion (final approval)

Terminal — trigger the pipeline
openclaw agent --agent orion \
  --message "New blog post: target keyword 'openclaw multi-agent setup'. Assign Scout to research, then Echo to write, then Radar to edit."

# Orion assigns Scout → Scout researches → @Echo handoff
# Echo writes → @Radar handoff → Radar edits → @Orion approval
# Total time: ~12 minutes for a complete 1,800-word SEO post

Real Example: Customer Support Escalation (Triage → Support → Escalation)

Multi-agent orchestration is not just for content. Customer support is one of the highest-value use cases — a tiered team handles volume automatically and only escalates genuinely complex cases.

agents.md — support team
# Support Team

## Agents
- @Triage   — Classifies incoming tickets: billing / technical / account / general
- @Support  — Handles billing and general questions. Uses knowledge base.
- @Escalate — Handles complex technical issues. Has full system access context.

## Flow
All tickets → Triage → Support (billing/general) or Escalate (technical/complex)

## Escalation Triggers
@Support escalates to @Escalate when:
- Issue involves data loss or security
- Customer reports the same problem 3+ times
- Resolution requires system changes
- Customer explicitly requests senior support

## Response SLA
- Triage: classify within 60 seconds
- Support: respond within 5 minutes
- Escalate: respond within 15 minutes

This pattern handles 80%+ of common support volume through @Support and only surfaces genuinely difficult cases to @Escalate. The triage agent costs almost nothing to run (fast model, short prompts) and prevents misrouting from the start.
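The routing decision itself is a simple lookup. Here it is expressed as a shell function to make the mapping explicit — in OpenClaw the @Triage agent makes this call via an @mention, so this is a sketch of the rule, not of the mechanism:

```shell
# Maps a ticket category to the agent that owns it, mirroring the Flow
# section of the support team's agents.md. Illustrative only — the real
# routing happens through the gateway's @mention handoffs.
route_ticket() {
  case "$1" in
    billing|general)    echo "@Support" ;;
    technical|complex)  echo "@Escalate" ;;
    *)                  echo "@Triage" ;;   # unknown category: re-classify
  esac
}

route_ticket billing     # -> @Support
route_ticket technical   # -> @Escalate
```

Writing the rule down this explicitly is also a useful exercise before you write the SOUL.md: if you cannot express the routing as a small table, the triage agent will not be able to apply it consistently either.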

Shared Memory and Context Passing Between Agents

The biggest challenge in multi-agent workflows is context loss. An agent receives a handoff message but does not have the background that the previous agent built up over several turns. Two mechanisms address this: inline context in handoff messages, and persistent MEMORY.md files.

For inline context, the rule is: never send a short handoff. When Echo passes a draft to Radar, it does not just say "review this." It sends the full draft, the target keyword, the search intent, and any constraints. The receiving agent needs everything it would need to start cold.

For persistent context, agents write structured data to a shared MEMORY.md in the workspace root. Any agent can read this file. Define a clear format — sections by topic, timestamps on updates — so agents know which data is current and which is stale.

Handoff message — full context, not just a note
@Radar review request:

TARGET KEYWORD: "openclaw multi-agent setup" (880/mo, low competition)
SEARCH INTENT: Tutorial / how-to. Searchers want step-by-step configuration.
TOP SERP: openclaw.com/docs/teams, reddit threads, one Medium post.
WORD COUNT: 1,847 words
DRAFT STATUS: First pass. No SEO optimization done yet.

SEO TASKS FOR YOU:
1. Rewrite H2s to include target keyword naturally
2. Write a meta description (150-160 chars)
3. Add 3 internal links to related CrewClaw blog posts
4. Check keyword density — target 1-1.5%
5. Flag any sections that bury the keyword too deep

[Full draft follows]

Monitoring Multi-Agent Workflows

A multi-agent pipeline that runs unmonitored will eventually stall. An agent gets a malformed handoff, enters a loop, or produces output the next agent cannot parse. Monitoring is not optional for production workflows.

Terminal — monitoring commands
# Check gateway status and active agents
openclaw gateway status

# View recent message activity across all agents
openclaw gateway logs --tail 50

# Check a specific agent's last session
openclaw agent --agent echo --message "What is your current task status?"

# View session file directly
cat ~/.openclaw/agents/echo/sessions/sessions.json | tail -20

For always-on pipelines, add a monitoring agent (a simple Haiku agent with a HEARTBEAT.md trigger) that pings each agent every hour and reports status to Telegram. This gives you mobile visibility without watching logs manually.
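The monitoring agent's core check can be as simple as scanning recent log output for errors. Here is a sketch against a sample log — the log line format shown is an assumption for illustration, not OpenClaw's actual format:

```shell
# Count ERROR lines in a log sample. A heartbeat agent could run the same
# grep against the output of `openclaw gateway logs --tail 50` and alert
# on a nonzero count. The log format below is invented for illustration.
logs='2026-03-05T10:01 orion handoff->scout ok
2026-03-05T10:04 scout handoff->echo ok
2026-03-05T10:22 echo ERROR malformed handoff from scout'

errors=$(echo "$logs" | grep -c 'ERROR')
echo "$errors"   # -> 1
[ "$errors" -gt 0 ] && echo "ALERT: pipeline needs attention"
```

A nonzero count is the trigger; what the monitoring agent does with it (Telegram message, pause the pipeline, ping @Orion) is up to your team rules.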

Common Pitfalls in Multi-Agent Setups

Circular handoffs

Problem: Agent A waits for Agent B, who is waiting for Agent A. The pipeline stalls silently.

Fix: Define a clear directional flow in agents.md. Every task has one owner at a time. No agent should send work back to an upstream agent without explicit rules for when that is allowed.

Context loss on handoff

Problem: Agent receives '@Review this' with no background. It has to guess what 'this' is or request clarification, breaking the pipeline.

Fix: Mandate full context in every handoff message. Include: what was done, what needs to happen next, any constraints, and the full artifact (draft, data, etc.). Treat every handoff like a cold start.

Model mismatch

Problem: Using Claude Sonnet for a triage agent that just classifies tickets into 4 categories. Expensive, slow, overkill.

Fix: Use fast/cheap models (Claude Haiku, GPT-4o mini) for coordination, routing, and classification. Reserve expensive models for writing, reasoning, and analysis tasks that actually need capability.

Too many agents too soon

Problem: Starting with 8 agents before the 3-agent pipeline is stable. Debugging a broken 8-agent system is a nightmare.

Fix: Build the minimum viable pipeline first (3 agents). Run it until it is reliable. Add agents one at a time and test each addition before adding the next.

Performance Tips for Multi-Agent Setups

Multi-agent teams can get expensive fast if you are not deliberate about model selection and context size. These five practices keep costs low without sacrificing output quality.

1. Assign models by task complexity

Haiku for routing, classification, and status updates. Sonnet or GPT-4o for writing and multi-step reasoning. Never use Opus unless the task genuinely requires it.

2. Trim context before handoffs

Do not forward an entire 10-turn conversation to the next agent. Summarize: what was decided, what is the artifact, what needs to happen next. Shorter context = faster, cheaper responses.

3. Use MEMORY.md for persistent facts

Store stable facts (brand voice, competitor data, target keywords) in MEMORY.md instead of repeating them in every message. Agents read the file once rather than you pasting it into every prompt.

4. Run coordination agents on Haiku

Your PM agent (@Orion) mostly delegates and tracks status. This is exactly the kind of work Haiku handles well. Saving $0.003 per message adds up to real money across thousands of pipeline runs.

5. Batch parallel work where possible

If your pipeline has independent branches — for example, one agent writes while another does keyword research — start both in parallel. Do not serialize tasks that have no dependency on each other.

Frequently Asked Questions

What is the difference between openclaw-agent-teams-guide and openclaw multi-agent setup?

The openclaw-agent-teams-guide covers the basics of creating an agent team — adding agents and wiring them up. OpenClaw multi-agent setup goes deeper: orchestration patterns, agents.md structure, shared memory, context passing between agents, and production monitoring. This guide is for teams that already have agents running and want to coordinate them into a reliable pipeline.

What is agents.md and why do I need it?

agents.md is the team-level configuration file that sits in your workspace root. While each agent's SOUL.md defines individual behavior (role, rules, tone), agents.md defines team behavior — who is on the team, what each agent's responsibilities are, how tasks flow between them, and what shared context all agents can access. Without agents.md, each agent operates in isolation and does not know about its teammates.

How do OpenClaw agents hand off work to each other?

Agents hand off work by mentioning another agent with @AgentName in their response. The OpenClaw gateway intercepts this mention and routes the message to the named agent, along with the relevant context. For example, after a writer finishes a draft, it can say '@Radar please review this draft for SEO' and the gateway delivers that message to the Radar agent. Handoff rules are defined in each agent's SOUL.md Handoffs section so the agent knows when to delegate.

Can OpenClaw agents share memory and context?

Yes. OpenClaw supports two forms of shared memory. First, agents.md provides a static shared context that all agents read at startup — team goals, communication rules, shared facts. Second, agents can write to a shared MEMORY.md file in the workspace, which other agents can read. This allows one agent to store research findings that another agent picks up later, enabling true context passing across long-running workflows.

What are the most common mistakes when setting up multi-agent workflows?

The three most common mistakes are: (1) Circular handoffs — Agent A waits for Agent B who is waiting for Agent A. Fix this by defining a clear flow direction in agents.md. (2) Context loss — an agent receives a handoff message but lacks the background to act on it. Fix this by including full context in every handoff message, not just a brief note. (3) Model mismatch — using a slow, expensive model for simple routing tasks. Fix this by using cheap/fast models (Haiku, GPT-4o mini) for coordination and expensive models (Sonnet, GPT-4o) only for writing and analysis.

Build your agent team visually with CrewClaw

CrewClaw lets you configure multi-agent teams with a visual builder. Pick roles, set handoffs, deploy.

Create Your Agent Team
Free to design. No credit card required.