OpenClaw System Requirements 2026: Complete Hardware and Cloud Specs Guide
Before you install OpenClaw, you need to know if your hardware can handle it. This guide covers every scenario: running a single agent on a $4 VPS, orchestrating a 10-agent team on a dedicated server, and running local models with Ollama on a GPU rig. Actual numbers, real benchmarks, no guesswork.
How OpenClaw Uses System Resources
OpenClaw is a Node.js application that acts as an orchestration layer between you and language model APIs. The framework itself is lightweight. The gateway process, which routes messages between agents, channels, and model providers, consumes between 80 and 150 MB of RAM at idle. Each active agent session adds roughly 20-40 MB on top of that.
The real resource demands come from two places: the number of concurrent agents you run, and whether you use local models via Ollama instead of cloud APIs. A single-agent setup with Anthropic Claude as the provider can run on almost anything. A five-agent team with local Llama 3.1 70B running through Ollama needs serious hardware.
Here is what OpenClaw actually does with your system resources:
- CPU: Minimal usage during idle. Spikes briefly when parsing agent responses, executing skills, and processing tool calls. Multi-agent setups with concurrent conversations create moderate sustained CPU load.
- RAM: The gateway, agent sessions, context windows, and skill execution all live in memory. Each agent holds its conversation history in RAM during active sessions.
- Disk: Agent configurations (SOUL.md, AGENTS.md, HEARTBEAT.md), session logs, skill scripts, and the Node.js runtime. Minimal unless you enable persistent session logging.
- Network: API calls to model providers. Each message exchange is 1-50 KB depending on context length. The gateway listens on port 18789 by default.
- GPU: Only required if running local models through Ollama. OpenClaw itself never touches the GPU.
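The gateway and per-agent memory figures above can be spot-checked on a running host. A minimal sketch (matching processes by the name `openclaw` is an assumption; adjust the pattern to your install):

```shell
#!/bin/sh
# Report resident memory (RSS) for every process whose command line
# mentions "openclaw" -- the gateway plus any agent workers.
# NOTE: the "openclaw" process-name pattern is an assumption.
for pid in $(pgrep -f openclaw); do
  # ps reports RSS in kilobytes; convert to megabytes for readability
  rss_kb=$(ps -o rss= -p "$pid")
  printf "pid %s: %s MB\n" "$pid" $((rss_kb / 1024))
done
```

If the gateway is idle, expect the largest figure to land in the 80-150 MB range quoted above.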
Minimum System Requirements
These are the absolute minimum specs to run OpenClaw with a single agent using a cloud API provider (Anthropic, OpenAI, or Google Gemini). Below these thresholds, you will hit out-of-memory errors, failed skill executions, or gateway crashes.
| Component | Minimum | Notes |
|---|---|---|
| OS | macOS 13+, Ubuntu 20.04+, Debian 11+ | Windows requires WSL 2. ARM64 and x86_64 both supported. |
| Node.js | v22.0.0 or later | Hard requirement. v20 and below will fail on startup. Use nvm to manage versions. |
| CPU | 1 core (x86_64 or ARM64) | Any modern processor from 2015 onwards. Single-threaded performance matters more than core count for single-agent setups. |
| RAM | 1 GB | Gateway uses ~150 MB. OS needs ~500 MB. Leaves ~350 MB for agent sessions and skills. |
| Disk | 2 GB free | Node.js runtime (~300 MB), OpenClaw + dependencies (~200 MB), agent configs and logs (~100 MB), headroom for session data. |
| Network | Stable internet connection | Required for cloud API calls. 1 Mbps upload/download is sufficient. Latency under 200ms to API endpoints recommended. |
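A quick preflight sketch can confirm a Linux host clears these minimums before you install. Thresholds are taken from the table; the script assumes GNU `free` and `df` are available:

```shell
#!/bin/sh
# Preflight check against the minimum requirements table:
# Node >= 22, >= 1 GB RAM, >= 2 GB free disk.
node_major=$(node --version 2>/dev/null | sed 's/^v//' | cut -d. -f1)
[ "${node_major:-0}" -ge 22 ] && echo "node: OK (v$node_major)" || echo "node: FAIL (need v22+)"

ram_mb=$(free -m | awk '/^Mem:/ {print $2}')
[ "${ram_mb:-0}" -ge 1024 ] && echo "ram: OK (${ram_mb} MB)" || echo "ram: FAIL (need 1024 MB)"

disk_gb=$(df -BG --output=avail / | tail -1 | tr -dc '0-9')
[ "${disk_gb:-0}" -ge 2 ] && echo "disk: OK (${disk_gb} GB free)" || echo "disk: FAIL (need 2 GB)"
```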
The Node.js 22 requirement is non-negotiable. OpenClaw is built and tested against the Node 22 runtime, using ES module syntax and runtime APIs in the forms that release ships. If you try to run it on Node 18 or 20, you will get immediate errors on startup. Install Node 22 via nvm:
nvm install 22
nvm use 22
node --version   # should show v22.x.x

Recommended Specs by Use Case
Minimum specs keep things running. Recommended specs keep things fast. Here is what you actually want for each common deployment pattern.
Single Agent with Cloud API
Running one agent (for example, a SOUL.md-based SEO analyst connected to Telegram) with Anthropic Claude or OpenAI as the provider. This is the most common setup for solo operators.
- CPU: 1-2 cores
- RAM: 1-2 GB
- Disk: 5 GB
- Estimated gateway RAM usage: 150-250 MB
- Monthly hosting cost: $4-6 on DigitalOcean or Hetzner
Multi-Agent Team (3-5 Agents) with Cloud API
Running a coordinated team like Orion (PM), Echo (Writer), and Radar (SEO Analyst) with shared context and agent-to-agent communication. The gateway handles routing between agents, and each agent maintains its own session state.
- CPU: 2-4 cores
- RAM: 4 GB
- Disk: 10 GB
- Estimated gateway RAM usage: 400-800 MB
- Monthly hosting cost: $12-24
Large Team (6-10+ Agents) with Cloud API
Production deployments with many concurrent agents, heartbeat monitoring, persistent sessions, and high message throughput. This is where CPU core count starts to matter because the Node.js event loop handles multiple concurrent API calls and skill executions.
- CPU: 4-8 cores
- RAM: 8-16 GB
- Disk: 20-50 GB (session logs grow fast)
- Estimated gateway RAM usage: 1-3 GB
- Monthly hosting cost: $24-48
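You can turn the figures from the gateway overview (roughly 150 MB for the gateway plus 20-40 MB per active agent) into a rough RAM budget for a planned team size. A sketch, using the 40 MB worst case; the 2x headroom multiplier for long contexts and skill execution is an assumption:

```shell
#!/bin/sh
# Rough RAM budget: 150 MB gateway + 40 MB per agent (worst case),
# then doubled for context growth and skill-execution headroom.
agents=8
base_mb=$((150 + 40 * agents))
budget_mb=$((base_mb * 2))
echo "estimated gateway footprint: ${base_mb} MB"
echo "provision at least:          ${budget_mb} MB"
```

For an 8-agent team this suggests provisioning roughly 1 GB for the gateway alone, which lines up with the 1-3 GB range observed above.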
Single Agent with Ollama (Local Models)
Running one agent backed by a local model through Ollama. No API costs, full privacy, but serious hardware requirements. The model runs in a separate Ollama process alongside the OpenClaw gateway.
- CPU: 4+ cores (8+ recommended for CPU-only inference)
- RAM: 16 GB minimum, 32 GB recommended
- GPU VRAM: 8 GB minimum (RTX 3060 12 GB or better)
- Disk: 20-80 GB (model weights are large)
- Monthly hosting cost: $0 (your own hardware) or $30-80 for a GPU cloud instance
GPU and Ollama Requirements
If you want to run OpenClaw agents with zero API costs, Ollama is the way. But local model inference is resource-intensive. Here is exactly what you need for each popular model.
| Model | Parameters | VRAM Required | RAM (CPU-only) | Disk (Model File) |
|---|---|---|---|---|
| Qwen 2.5 7B | 7B | 6 GB | 10 GB | 4.7 GB |
| Llama 3.1 8B | 8B | 6 GB | 10 GB | 4.9 GB |
| Mistral 7B | 7B | 6 GB | 10 GB | 4.1 GB |
| Llama 3.1 70B (Q4) | 70B | 40 GB | 48 GB | 39 GB |
| Qwen 2.5 72B (Q4) | 72B | 42 GB | 52 GB | 41 GB |
| DeepSeek-V3 (Q4) | 671B MoE | 80+ GB (multi-GPU) | Not practical | ~200 GB |
For most OpenClaw users running local models, the sweet spot is a 7B-8B parameter model on an NVIDIA GPU with 8-12 GB VRAM. An RTX 3060 12 GB, RTX 4060 Ti 16 GB, or Apple M2 Pro/Max with unified memory handles these models well.
Apple Silicon users get an advantage here. The M1 Pro, M2 Pro, M3 Pro, and their Max variants use unified memory, meaning the system RAM doubles as VRAM. A MacBook Pro with 32 GB unified memory can run a quantized 70B model entirely in memory. This makes Mac Mini M4 Pro an excellent always-on OpenClaw server with local models.
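Before pulling a model, it is worth checking your actual hardware against the VRAM table above. A sketch for NVIDIA GPUs, with a fallback for Apple Silicon where total unified memory is the figure that matters (`system_profiler` is macOS-specific):

```shell
#!/bin/sh
# Report available VRAM to match against the model table above.
if command -v nvidia-smi >/dev/null 2>&1; then
  # Total VRAM per GPU, in MiB
  nvidia-smi --query-gpu=name,memory.total --format=csv,noheader
elif [ "$(uname)" = "Darwin" ]; then
  # Apple Silicon: unified memory doubles as VRAM
  system_profiler SPHardwareDataType | grep "Memory:"
else
  echo "no NVIDIA GPU tooling found; plan for CPU-only inference (see RAM column)"
fi
```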
Configure Ollama as your provider in OpenClaw:
openclaw config set provider ollama
openclaw config set model qwen2.5:7b
openclaw config set ollamaHost http://localhost:11434
# Verify Ollama is running and the model is pulled
ollama list
ollama run qwen2.5:7b "Hello, are you working?"

Cloud VPS Requirements and Sizing
Running OpenClaw on a cloud VPS is the most common production setup. You get a static IP, 24/7 uptime, and no dependency on your local machine staying powered on. Here are concrete recommendations for the most popular providers.
| Use Case | DigitalOcean | Hetzner | Monthly Cost |
|---|---|---|---|
| 1 agent, cloud API | Basic Droplet (1 vCPU, 1 GB RAM) | CX22 (2 vCPU, 4 GB RAM) | $4-6 |
| 3-5 agents, cloud API | Regular Droplet (2 vCPU, 4 GB RAM) | CX32 (4 vCPU, 8 GB RAM) | $12-24 |
| 6-10+ agents, cloud API | CPU-Optimized (4 vCPU, 8 GB RAM) | CX42 (8 vCPU, 16 GB RAM) | $24-48 |
| 1 agent + Ollama 7B | GPU Droplet (not available) | Dedicated + external GPU provider | $30-80 |
Hetzner consistently offers the best price-to-performance ratio for OpenClaw deployments in Europe. Their CX22 at 4.35 EUR/month gives you 2 vCPU and 4 GB RAM, which is overkill for a single cloud-API agent but gives you room to grow. DigitalOcean is the better choice if your API providers have US-based endpoints and you want lower latency.
A typical VPS setup takes under 5 minutes:
# On a fresh Ubuntu 22.04 VPS
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.1/install.sh | bash
source ~/.bashrc
nvm install 22
npm install -g openclaw
# Start the gateway in the background
openclaw gateway start --daemon
# Verify
openclaw gateway status

Docker Requirements
Running OpenClaw in Docker is ideal for reproducible deployments and CI/CD pipelines. The Docker image includes Node.js 22 and the OpenClaw runtime. Here is what the host machine needs.
- Docker Engine: v24.0 or later (v25+ recommended for BuildKit improvements)
- Docker Compose: v2.20 or later (if using compose files)
- Host RAM overhead: Add 200-300 MB on top of agent requirements for the Docker daemon
- Disk for images: The OpenClaw base image is ~450 MB. Add model files if using Ollama.
- NVIDIA Container Toolkit: Required only if passing GPU to the container for Ollama
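A quick sketch to confirm the host meets the Docker-side versions above; it tolerates a machine where Docker is not installed at all:

```shell
#!/bin/sh
# Verify Docker Engine and Compose versions against the requirements:
# Engine >= 24, Compose >= 2.20.
if command -v docker >/dev/null 2>&1; then
  docker version --format 'engine: {{.Server.Version}}' 2>/dev/null
  docker compose version 2>/dev/null
else
  echo "docker not installed"
fi
```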
A minimal Docker Compose configuration for a cloud-API agent:
version: "3.8"
services:
  openclaw:
    image: openclaw/openclaw:latest
    ports:
      - "18789:18789"
    volumes:
      - ./agents:/app/agents
      - openclaw-data:/root/.openclaw
    environment:
      - ANTHROPIC_API_KEY=sk-ant-...
    restart: unless-stopped
    deploy:
      resources:
        limits:
          memory: 512M
          cpus: "1.0"
volumes:
  openclaw-data:

For Ollama inside Docker, you need the NVIDIA Container Toolkit and GPU passthrough:
services:
  ollama:
    image: ollama/ollama:latest
    ports:
      - "11434:11434"
    volumes:
      - ollama-models:/root/.ollama
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
  openclaw:
    image: openclaw/openclaw:latest
    ports:
      - "18789:18789"
    volumes:
      - ./agents:/app/agents
    environment:
      - OLLAMA_HOST=http://ollama:11434
    depends_on:
      - ollama
volumes:
  ollama-models:

Network Requirements
OpenClaw's network requirements depend entirely on your setup. Cloud API agents need reliable internet. Fully local setups with Ollama can run completely offline.
- Bandwidth: 1 Mbps symmetric is sufficient for cloud API agents. Each API call exchanges 1-50 KB of data. Even heavy multi-agent setups rarely exceed 100 KB/s sustained.
- Latency: Under 200ms to your API provider is ideal. Anthropic and OpenAI serve their APIs primarily from US regions, so hosting in Europe or Asia typically adds 100-300ms per API call.
- Ports: The gateway listens on port 18789 (configurable). Outbound HTTPS (443) is needed for API calls. Inbound ports for Telegram webhooks, Slack events, or Discord bots depend on your channel integrations.
- Firewall: Allow outbound TCP 443 to api.anthropic.com, api.openai.com, generativelanguage.googleapis.com. Allow inbound TCP on your gateway port if exposing agents via channels.
- Offline mode: With Ollama running locally, no internet is needed for inference. You still need internet for initial setup (npm install, model download) and for channel integrations (Telegram, Slack).
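The latency guidance above can be measured directly with curl's timing variables. A sketch, with the endpoints copied from the firewall list; unreachable hosts are reported rather than aborting the loop:

```shell
#!/bin/sh
# Measure TCP connect time to each provider endpoint from this host.
# Connect times under ~0.2s keep round trips in the recommended range.
for host in api.anthropic.com api.openai.com generativelanguage.googleapis.com; do
  t=$(curl -s -o /dev/null --max-time 5 -w '%{time_connect}' "https://$host" 2>/dev/null) \
    && echo "$host: ${t}s connect" \
    || echo "$host: unreachable"
done
```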
Full Comparison: Requirements by Setup Type
This table summarizes every setup type side by side. Use it as a quick reference when deciding what hardware or VPS plan you need.
| Setup | CPU | RAM | Disk | GPU | Cost/mo |
|---|---|---|---|---|---|
| 1 agent + cloud API | 1 core | 1 GB | 2 GB | None | $4-6 |
| 3-5 agents + cloud API | 2-4 cores | 4 GB | 10 GB | None | $12-24 |
| 10+ agents + cloud API | 4-8 cores | 8-16 GB | 20-50 GB | None | $24-48 |
| 1 agent + Ollama 7B | 4+ cores | 16 GB | 20 GB | 8 GB VRAM | $0-80 |
| 1 agent + Ollama 70B | 8+ cores | 32-64 GB | 80 GB | 40+ GB VRAM | $0-150 |
| Raspberry Pi 4 (8 GB) | 4 cores ARM | 8 GB | 32 GB SD | None | $0 (own hw) |
| Mac Mini M4 Pro (local) | 12 cores | 24-48 GB unified | 256+ GB SSD | Unified (18-core GPU) | $0 (own hw) |
Performance Benchmarks and Optimization Tips
Real numbers from production OpenClaw deployments. These benchmarks were measured on common hardware configurations.
Gateway Startup Time
- 1 agent loaded: 1.2 seconds
- 5 agents loaded: 2.8 seconds
- 10 agents loaded: 5.1 seconds
- Startup time scales linearly with agent count. Each agent adds roughly 0.4 seconds as the gateway reads and parses SOUL.md, AGENTS.md, and skill files.
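The three data points fit a simple linear model: roughly 0.8 s of fixed gateway startup plus ~0.43 s per agent. A sketch for extrapolating to larger teams (the coefficients are fitted to the benchmarks above and were not measured beyond 10 agents):

```shell
#!/bin/sh
# Extrapolate gateway startup time from the measured benchmarks:
# ~0.8 s fixed cost + ~0.43 s per agent loaded.
agents=20
awk -v n="$agents" 'BEGIN { printf "estimated startup: %.1f s for %d agents\n", 0.8 + 0.43 * n, n }'
```

For 20 agents the model predicts roughly 9.4 seconds.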
Memory Usage Under Load
- Gateway idle (no active sessions): 80-120 MB
- 1 active conversation (short context): 150-200 MB
- 1 active conversation (long context, 50+ messages): 250-400 MB
- 5 concurrent conversations: 600 MB - 1.2 GB
- 10 concurrent conversations: 1.5-3 GB
Tips to Reduce Resource Usage
- Clear sessions regularly. Delete ~/.openclaw/agents/*/sessions/sessions.json to free RAM from stale conversation context. This is the single biggest memory saver.
- Use shorter SOUL.md files. Every token in your SOUL.md is sent with every API call. A 2,000-token SOUL.md versus a 500-token one means 4x more data per request. Keep system prompts concise.
- Disable heartbeat if not needed. The heartbeat feature runs periodic checks, consuming CPU cycles and API calls. Disable it for agents that only need to respond on demand.
- Use quantized models with Ollama. Q4_K_M quantization gives 90-95% of the quality of full precision at roughly half the VRAM and RAM requirements. Always prefer quantized models for local deployment.
- Set memory limits in Docker. Use deploy resource limits to prevent a runaway agent from consuming all host RAM. Set the limit to 2x your expected peak usage.
- SSD over HDD. Session reads and writes are frequent. An NVMe SSD versus a spinning disk makes a noticeable difference in agent response time, especially when loading large session histories.
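The first tip, clearing stale sessions, is easy to automate. A sketch that deletes session files older than a cutoff; the directory layout follows the path given in the tip, and the 7-day default is an assumption to tune:

```shell
#!/bin/sh
# Remove stale session files to reclaim space from old conversation context.
# Layout assumed from the tip above: ~/.openclaw/agents/<name>/sessions/sessions.json
SESS_ROOT="${1:-$HOME/.openclaw/agents}"
CUTOFF_DAYS="${2:-7}"
find "$SESS_ROOT" -path '*/sessions/sessions.json' -mtime +"$CUTOFF_DAYS" -print -delete 2>/dev/null
echo "cleaned sessions older than ${CUTOFF_DAYS} days under ${SESS_ROOT}"
```

Run it from cron (e.g. daily) to keep long-lived deployments from accumulating context.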
Example: Lightweight Agent SOUL.md for Low-Spec Machines
If you are running on constrained hardware (Raspberry Pi, $4 VPS, or an old laptop), keep your agent configuration minimal. Here is a lean SOUL.md that keeps token usage and memory footprint low:
# SOUL.md - Lightweight SEO Monitor
## Identity
You are a concise SEO monitoring agent.
## Rules
- Respond in 3 sentences or fewer
- Never generate long-form content
- Report only actionable findings
- Use bullet points, not paragraphs
## Skills
- Check Google Search Console for ranking drops
- Alert on 404 errors via sitemap monitoring
- Weekly keyword position summary
## Output Format
[METRIC]: [VALUE] ([CHANGE])
[ACTION NEEDED]: [Yes/No]
[DETAIL]: One sentence max.

This SOUL.md is under 150 tokens. Compare that to a full-featured agent SOUL.md that might be 800-2,000 tokens. On a 1 GB VPS, this difference matters because the system prompt is included in every API call's context window, affecting both latency and memory during response parsing.
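You can get a rough token count for a SOUL.md without any tooling: for English prose, one token is roughly three-quarters of a word, so tokens ≈ words × 1.33. A sketch (the 1.33 multiplier is a common rule of thumb, not an exact tokenizer):

```shell
#!/bin/sh
# Rough token estimate for a system prompt file: words * 1.33.
file="${1:-SOUL.md}"
words=$(wc -w < "$file")
awk -v w="$words" 'BEGIN { printf "~%d tokens (%d words)\n", w * 1.33, w }'
```

If the estimate comes out above a few hundred tokens on constrained hardware, trim the prompt before deploying.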
Frequently Asked Questions
What are the absolute minimum specs to run OpenClaw?
The bare minimum is a machine with 1 CPU core, 1 GB of RAM, 2 GB of free disk space, and Node.js 22 installed. This is enough to run a single agent using a cloud API provider like Anthropic or OpenAI. Any modern Linux VPS, Raspberry Pi 4, or old laptop can handle this. The gateway process itself uses around 80-150 MB of RAM at idle.
Do I need a GPU to run OpenClaw?
No. OpenClaw itself does not use a GPU at all. GPUs are only needed if you want to run local language models through Ollama. For cloud API providers (Anthropic Claude, OpenAI GPT, Google Gemini), your hardware only needs to handle the lightweight Node.js gateway process. The heavy computation happens on the provider's servers.
Can I run OpenClaw on a Raspberry Pi?
Yes. A Raspberry Pi 4 with 4 GB or 8 GB RAM runs OpenClaw well for cloud API agents. The ARM64 architecture is fully supported. Install Node.js 22 via nvm, and you can run 2-3 agents comfortably on a Pi 4 8 GB model. Running local models on a Pi is not practical due to the lack of GPU acceleration and limited RAM.
How much does it cost to run OpenClaw on a cloud VPS?
For a single agent using cloud APIs, a $4-6/month VPS (1 vCPU, 1 GB RAM) from DigitalOcean or Hetzner is sufficient. Multi-agent teams need $12-24/month (2-4 vCPU, 4-8 GB RAM). Running local models via Ollama on a GPU cloud instance costs $30-150/month depending on VRAM. Most solo operators spend under $10/month on hosting.
Does OpenClaw work on Windows natively?
OpenClaw requires WSL 2 (Windows Subsystem for Linux) on Windows. Native Windows support is experimental and has known issues with file paths and certain skills. Install WSL 2 from the Microsoft Store, set up Ubuntu 22.04 or later, install Node.js 22 inside WSL, and follow the standard Linux setup. Performance inside WSL 2 is near-native.
Skip the Setup Hassle
Get a pre-configured OpenClaw agent package with optimized SOUL.md files, ready-to-run skills, and deployment scripts. Working in 60 seconds.