Why Backends Matter
When an LLM connects to your MCP server, it calls list_tools() to discover what’s available. The backend determines what comes back.
The choice of backend directly affects:
| Factor | Impact |
|---|---|
| Token cost | More tools in context = more tokens per request |
| Accuracy | Fewer choices = LLM picks the right tool more often |
| Determinism | Structured execution = more predictable outcomes |
| Composability | Code/Plan modes let the LLM chain tools in a single turn |
Available Backends
Plain
All tools exposed directly. Simple, no wrapping. Best for small APIs.
Search
Semantic search over tools. Agent discovers then calls. Best for large APIs.
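A minimal sketch of the two meta-tools a Search backend exposes (the names search_tools and call_tool come from the comparison table below; the keyword matching here is a stand-in for real semantic search over embeddings):

```python
# Toy tool registry: name -> description.
TOOLS = {
    "create_invoice": "Create a new invoice for a customer",
    "refund_payment": "Refund a payment to a customer",
}

def search_tools(query: str) -> list[str]:
    # Real backends use embedding similarity; word overlap keeps the sketch simple.
    q = set(query.lower().split())
    return [name for name, desc in TOOLS.items() if q & set(desc.lower().split())]

def call_tool(name: str, **kwargs) -> str:
    # Stand-in for dispatching to the actual tool implementation.
    return f"called {name} with {kwargs}"

hits = search_tools("refund payment")   # agent discovers...
print(call_tool(hits[0], payment_id="p_123"))  # ...then calls
```

The LLM never sees the full tool list, only these two entry points, which is why context cost stays flat as the API grows.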
Plan
Agent submits a JSON execution plan. Sequential steps with data passing. Best for workflows.
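To make "sequential steps with data passing" concrete, here is a hypothetical plan payload and a tiny executor; the exact schema (step list, `$stepN` references) is an assumption for illustration, not the library's real format:

```python
def run_plan(plan: dict, tools: dict) -> dict:
    """Run steps in order, substituting "$stepN" args with earlier results."""
    results = {}
    for i, step in enumerate(plan["steps"]):
        args = {
            k: results[v] if isinstance(v, str) and v.startswith("$step") else v
            for k, v in step["args"].items()
        }
        results[f"$step{i}"] = tools[step["tool"]](**args)
    return results

# Stub tools standing in for real MCP tools.
tools = {
    "get_user": lambda user_id: {"id": user_id, "email": "a@example.com"},
    "send_email": lambda to, body: f"sent to {to['email']}",
}

# Step 1 consumes step 0's output via the "$step0" reference.
plan = {
    "steps": [
        {"tool": "get_user", "args": {"user_id": 42}},
        {"tool": "send_email", "args": {"to": "$step0", "body": "hi"}},
    ]
}
print(run_plan(plan, tools)["$step1"])  # sent to a@example.com
```

The whole plan executes server-side in one turn, so the LLM pays context for a single execute_plan call rather than one round-trip per tool.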
Code
Agent writes Python to call tools. Maximum efficiency. Best for complex logic.
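A sketch of the kind of Python an agent might submit in Code mode; the injected `tools` namespace and its method names are assumptions for illustration:

```python
# Stand-in for the tool namespace a Code backend would inject.
class Tools:
    def list_orders(self, status: str) -> list[dict]:
        return [{"id": 1, "total": 120}, {"id": 2, "total": 80}]

    def refund(self, order_id: int) -> str:
        return f"refunded {order_id}"

tools = Tools()

# Filter and act in one turn: logic the other backends would need
# multiple LLM round-trips (or a rigid plan) to express.
results = [
    tools.refund(o["id"])
    for o in tools.list_orders("disputed")
    if o["total"] > 100
]
print(results)  # ['refunded 1']
```

Arbitrary control flow (loops, conditionals, error handling) is what makes this the most efficient backend for complex logic.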
Configuring a Backend
The default backend is Plain. Change it by passing ProviderType.SEARCH, ProviderType.PLAN, or ProviderType.CODE.
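A hypothetical configuration sketch: only the ProviderType values above come from this doc, so the enum stub and the server constructor below are assumptions standing in for the library's real API:

```python
from enum import Enum

class ProviderType(Enum):  # stand-in for the library's enum
    PLAIN = "plain"
    SEARCH = "search"
    PLAN = "plan"
    CODE = "code"

def make_server(tools: list[str], provider: ProviderType = ProviderType.PLAIN) -> dict:
    # Hypothetical constructor; defaults to Plain, as described above.
    return {"tools": tools, "provider": provider}

server = make_server(tools=["get_user", "send_email"], provider=ProviderType.SEARCH)
print(server["provider"].value)  # search
```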
Comparison
| Backend | LLM sees | Context cost | Best for |
|---|---|---|---|
| Plain | All tools directly | High | Under 20 tools |
| Search | search_tools + call_tool | Medium | 100+ tools |
| Plan | execute_plan | Low | Multi-step workflows |
| Code | execute_code | Minimal | Complex logic, iteration |
Backends + Stages
Backends work on top of stages. If you define stages, the backend operates only on the tools visible in the current stage, not on all tools. For example, in the browse stage with Code mode, calling tools.pay() raises an error; it becomes available only after transitioning to the checkout stage.
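The stage-gating behavior can be sketched as follows; the stage names come from the example above, but the gating mechanism itself is an assumption:

```python
class StagedTools:
    """Stand-in for stage-aware tool dispatch: only the current
    stage's tools are callable."""
    STAGES = {"browse": {"search", "add_to_cart"}, "checkout": {"pay"}}

    def __init__(self) -> None:
        self.stage = "browse"

    def call(self, name: str) -> str:
        if name not in self.STAGES[self.stage]:
            raise RuntimeError(f"{name} not available in stage {self.stage!r}")
        return f"{name} ok"

tools = StagedTools()
try:
    tools.call("pay")          # browse stage: pay is hidden, so this raises
except RuntimeError as e:
    print(e)

tools.stage = "checkout"       # after transitioning to checkout
print(tools.call("pay"))       # pay ok
```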