muxed
Every MCP server you add dumps schemas into context, and the model spends tokens figuring out tools instead of doing your task. muxed moves tools to a background daemon and discovers them lazily — via shell or Node.js API.
Every MCP server dumps its full schema into the prompt. The context fills up with tool definitions, and your agent stops following the instructions you actually wrote.
Anthropic found that intermediate tool results consume the bulk of the tokens flowing through the model.
A standard MCP server setup fills a quarter of the context window before the agent even starts.
Tool selection accuracy collapses after just a handful of connected MCP servers.
The model is smart enough. It's the context that's broken — every token spent on schemas is a token not spent following your instructions.
Skills, prompts, and default tools are deterministic — models always execute them. MCP tools compete for attention, and the more you add, the less your designed trajectories get followed.
Offloading tools to muxed is context engineering at the infrastructure level. Skills load first, the right MCP tools get called from there — exactly the trajectory you designed.
Tools live in the daemon, not the prompt. Agents find what they need with muxed grep — no schema preloading, no token waste.
Clean context means skills and prompts get followed. Your agent loads instructions first, then calls the right tool.
Configure once, use everywhere. muxed reads your existing agent configs — zero per-agent setup.
Chain MCP calls through bash pipes or Node.js. Automate routines without burning tokens on repeated tool calls.
Instead of loading every schema into the prompt, agents search for the tool they need, inspect it, and call it. The model's context stays clean — only the final result goes in.
# find the tool you need – not all 200
$ muxed grep "read"
filesystem/read_file    Read file contents
postgres/query          Read data via SQL

# inspect its schema
$ muxed info filesystem/read_file
path (string, required)    File path to read

# call it – only the result enters the LLM
$ muxed call filesystem/read_file \
    '{"path": "/tmp/config.json"}'
A typical MCP setup burns a third of the context on tool schemas the agent may never use. muxed moves them out entirely — freeing space for the skills, prompts, and conversation that actually drive results.
[Context window visualization: packed with tool schemas before muxed; roughly a third freed after]
Stop burning tokens on the same routine tasks. Chain MCP calls through bash pipes or write Node.js scripts — intermediate data never re-enters the model.
It's the same insight behind Anthropic's code execution for MCP and Cloudflare's Code Mode — delivered as a CLI and npm package.
# query → filter → write. model never sees the raw 10k rows.
$ muxed call postgres/query \
    '{"sql":"SELECT * FROM users"}' --json \
  | jq '[.[] | select(.active)]' \
  | muxed call filesystem/write_file \
    '{"path":"/tmp/active.json"}' -
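The token saving comes from ordinary code doing the reduction. Here is a stand-alone sketch of the same query → filter shape in Node — not the muxed API; the sample rows are a hypothetical stand-in for a postgres/query result:

```javascript
// Hypothetical stand-in for a `postgres/query` tool result —
// in real use this would be thousands of rows.
const rows = [
  { email: 'a@example.com', active: true },
  { email: 'b@example.com', active: false },
  { email: 'c@example.com', active: true },
];

// The filter runs in code, so inactive rows never become tokens
// in the model's context — only the reduced result does.
const active = rows.filter(r => r.active).map(r => r.email);
console.log(JSON.stringify(active)); // prints ["a@example.com","c@example.com"]
```

Whether the reduction happens in jq or in JavaScript, the point is the same: the model only ever sees the final, filtered output.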
muxed is also an npm package. Agents write Node.js scripts with typed results, Promise.all for parallel calls across servers, and the full npm ecosystem.
import { createClient } from 'muxed';

const client = await createClient();

// query churned customers from PostHog
const churn = await client.call('posthog/query-run', {
  query: { kind: 'HogQLQuery', query: 'SELECT email FROM persons' }
});

// pull support history in parallel
// (assumes the call resolves to an array of { email } rows)
const history = await Promise.all(
  churn.map(c =>
    client.call('intercom/search-conversations', {
      query: c.email, limit: 5
    })
  )
);

// output the results
console.log(history);
Auto-discover your MCP servers, start the daemon, and run your first tool call.
npx muxed init               generates config from your existing setup
muxed tools                  lists every tool across all servers
muxed call server/tool '{}'  invokes any tool from the command line