One provider, all your agents
Add ClawPane to
OpenClaw in minutes.
In OpenClaw go to Settings → Model Providers → Add Provider. Paste your ClawPane URL and API key. Every agent in your gateway instantly gets smart routing.
See full setup guide →

What you get
Everything routing.
Nothing extra.
Automatic model selection inside OpenClaw
Every OpenClaw request is scored against cost, latency, quality, and carbon footprint. The router picks the winner — you never touch a model name in your agent config again.
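The scoring idea above can be sketched as a weighted sum over candidate models. The model names, numbers, and weights below are purely illustrative, not ClawPane's actual scoring data or formula:

```python
# Illustrative candidates: cost, latency, and carbon are normalized penalties
# (lower is better), quality is a reward (higher is better). Invented values.
candidates = [
    {"name": "model-a", "cost": 1.00, "latency": 0.80, "quality": 0.90, "carbon": 0.70},
    {"name": "model-b", "cost": 0.20, "latency": 0.30, "quality": 0.60, "carbon": 0.20},
    {"name": "model-c", "cost": 0.50, "latency": 0.50, "quality": 0.80, "carbon": 0.40},
]

# Example weights for a cost-leaning router; a real router would expose these dials.
weights = {"cost": 0.4, "latency": 0.2, "quality": 0.3, "carbon": 0.1}

def score(c):
    # Reward quality, penalize cost, latency, and carbon footprint.
    return (weights["quality"] * c["quality"]
            - weights["cost"] * c["cost"]
            - weights["latency"] * c["latency"]
            - weights["carbon"] * c["carbon"])

best = max(candidates, key=score)  # the "winner" the router would pick
```

With these weights the cheap, fast model wins even though its quality score is lowest; shifting weight toward quality flips the choice.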
Per-router weight tuning
Create multiple routers with different objectives. Route support agents through a cost-first config, code agents through quality-first — all from the same OpenClaw gateway.
Drop-in OpenClaw provider
Add ClawPane as a provider in OpenClaw's Settings → Model Providers. One URL, one API key. All your existing agents and tools keep working.
Agent-native routing
OpenClaw agents can dynamically switch routing strategy mid-conversation. No static model config required — the router adapts to each request.
Real-time cost visibility
Every response includes metadata with the selected model, cost, latency, and environmental impact. See exactly what ran inside OpenClaw and what it cost.
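Reading that metadata could look like the sketch below. The field names and figures are assumptions for illustration, not ClawPane's documented response schema:

```python
import json

# Hypothetical response body with routing metadata attached; invented values.
raw = """{
  "choices": [{"message": {"content": "Hello!"}}],
  "clawpane": {
    "model": "example-model",
    "cost_usd": 0.00042,
    "latency_ms": 310,
    "carbon_g": 0.05
  }
}"""

meta = json.loads(raw)["clawpane"]
summary = f'{meta["model"]}: ${meta["cost_usd"]}, {meta["latency_ms"]} ms'
```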
Automatic fallback chains
If a provider is down or rate-limited, the router tries the next best option automatically. Your OpenClaw agents complete their requests even when individual providers fail.
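A fallback chain amounts to trying providers in ranked order until one succeeds. This is a minimal sketch with invented provider names and a simulated outage, not ClawPane's implementation:

```python
def call(provider, prompt):
    # Stand-in for a real provider call; provider-a simulates a rate limit.
    if provider == "provider-a":
        raise RuntimeError("rate limited")
    return f"{provider}: answer to {prompt!r}"

def route(prompt, chain=("provider-a", "provider-b", "provider-c")):
    errors = []
    for provider in chain:
        try:
            return call(provider, prompt)
        except RuntimeError as exc:
            errors.append((provider, str(exc)))  # record failure, try the next
    raise RuntimeError(f"all providers failed: {errors}")

result = route("ping")  # falls through provider-a to provider-b
```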
Setup in 3 steps
Live inside OpenClaw
in under 5 minutes.
Create a router
Set your optimization weights — cost, speed, quality, carbon. Use a preset or dial in custom values for your OpenClaw workload.
Create router →

Get your API key
Generate a ClawPane API key from the dashboard. It works across all your routers.
Go to settings →

Add it to OpenClaw
In OpenClaw go to Settings → Model Providers → Add Provider. Set the URL, paste your key, and pick a model preset.
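The provider entry boils down to three values. The field names below are a hypothetical shape for illustration, not OpenClaw's actual settings schema:

```python
# Hypothetical provider entry; every field name here is an assumption.
provider = {
    "name": "ClawPane",
    "base_url": "https://example.clawpane.invalid/v1",  # your router URL
    "api_key": "cp_example_key",                        # from the ClawPane dashboard
    "preset": "balanced",                               # model preset chosen in step 1
}

# Sanity-check that the two required values are filled in.
missing = [k for k in ("base_url", "api_key") if not provider.get(k)]
```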
OpenClaw setup guide →

Preset routers
Start with a preset.
Tune from there.
Four built-in routing strategies cover most workloads out of the box. Clone any preset and adjust the weights for your specific use case.
Cost, speed, and quality in equal measure.
Lowest latency above all else.
Cheapest viable model for every request.
Highest-scoring model regardless of cost.
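The four presets above could correspond to weight vectors like these. The numbers are illustrative only, not ClawPane's actual preset values:

```python
# Illustrative weight vectors for the four presets; each sums to 1.
presets = {
    "balanced":      {"cost": 0.33, "speed": 0.33, "quality": 0.33, "carbon": 0.01},
    "speed-first":   {"cost": 0.10, "speed": 0.70, "quality": 0.15, "carbon": 0.05},
    "cost-first":    {"cost": 0.70, "speed": 0.10, "quality": 0.15, "carbon": 0.05},
    "quality-first": {"cost": 0.00, "speed": 0.10, "quality": 0.85, "carbon": 0.05},
}

# "Clone and tune": start from a preset, then shift weight between objectives.
custom = dict(presets["cost-first"], quality=0.25, cost=0.60)
```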
Open source
The routing algorithm is open.
The technology that decides which model handles each request is published openly. Anyone can read how decisions are made, verify the logic, or contribute. The routing data we've collected over time stays proprietary — but the algorithm itself is yours to inspect.
New
Debate Mode — 3 models, 1 best answer.
For high-stakes decisions where accuracy matters more than cost. Debate mode sends your request to 3 models from different families in parallel — GPT, Claude, Gemini — then an arbitrator synthesizes the best possible answer from all three responses.
- Diverse reasoning — panelists are auto-selected from distinct model families
- Intelligent arbitration — a top-tier model evaluates and synthesizes the final answer
- ~4× cost — 3 panelists + 1 arbitrator per request, best for critical queries
Enable debate mode globally on your router or select the “Debate” preset when creating a new router.
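The debate flow above can be sketched as three parallel panelist calls followed by one arbitration pass. The panelist outputs here are canned stand-ins rather than real model calls, and the function names are invented for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

def panelist(family, question):
    # Stand-in for a model call; a real panelist would query one model family.
    return f"[{family}] draft answer to {question!r}"

def arbitrate(drafts):
    # A real arbitrator is a top-tier model that synthesizes the drafts;
    # here we simply join them to show the data flow.
    return "synthesis of: " + "; ".join(drafts)

def debate(question, families=("gpt", "claude", "gemini")):
    # 3 panelists run in parallel, then 1 arbitrator: 4 calls total (~4x cost).
    with ThreadPoolExecutor(max_workers=len(families)) as pool:
        drafts = list(pool.map(lambda f: panelist(f, question), families))
    return arbitrate(drafts)

answer = debate("Should we migrate the database?")
```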
Ready?
