MCP tools¶
MCP tools are the primary output of Spectral. They let AI agents call any discovered API directly — no browser automation, no manual integration code, no protocol-specific knowledge required.
Prerequisites¶
- At least one capture for the app (see Capture)
- An Anthropic API key (set via ANTHROPIC_API_KEY or stored on first prompt)
Generating tools¶
Run mcp analyze with the app name:
spectral mcp analyze myapp
This produces one tool per business capability discovered in the captures. Tools are stored in managed storage at ~/.local/share/spectral/apps/<app>/tools/.
If the app requires authentication, set it up before using the tools:
spectral auth analyze myapp
spectral auth login myapp
mcp analyze accepts a few optional flags:
- --skip-enrich: skip business description generation for faster iteration.
- --model: override the default LLM model.
- --debug: save all LLM prompts and responses to disk.
Connecting the MCP server¶
Register the server with your MCP client:
spectral mcp install
This auto-detects Claude Desktop and Claude Code and registers the server with each. Use --target claude-desktop or --target claude-code to install to a specific client only.
For other MCP clients, add a stdio server entry with the command spectral mcp stdio. For example, in a JSON config file:
{
  "mcpServers": {
    "spectral": {
      "command": "spectral",
      "args": ["mcp", "stdio"]
    }
  }
}
Authentication¶
For tools that require authentication, the server automatically manages tokens:
- If a valid (non-expired) token exists in managed storage, its headers are injected into the request.
- If the token has expired but a refresh token is available, the server auto-refreshes before making the request.
- If no valid token is available, the server returns an error instructing the user to run spectral auth login.
AI agents never need to handle authentication themselves — the server takes care of it transparently.
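The token decision order above can be sketched as follows. This is an illustrative Python sketch, not Spectral's implementation; the token dict shape, the refresh helper, and the error messages are all assumptions.

```python
import time

def refresh(token):
    # Placeholder for the real refresh exchange (assumption: it returns a
    # fresh token dict with new headers and a later expiry).
    return {"headers": {"Authorization": "Bearer <new>"},
            "expires_at": time.time() + 3600}

def resolve_auth_headers(token):
    """Mirror the decision order: valid token -> inject its headers,
    expired but refreshable -> refresh first, otherwise -> error."""
    if token is None:
        raise RuntimeError("no credentials; run `spectral auth login`")
    if token["expires_at"] > time.time():
        return token["headers"]                  # valid: inject as-is
    if token.get("refresh_token"):
        return refresh(token)["headers"]         # expired: auto-refresh
    raise RuntimeError("token expired; run `spectral auth login`")
```

The agent calling the tool never sees any of this; it only sees the tool result or the login-instruction error.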
Re-running analysis¶
You can re-run mcp analyze at any time. New captures are merged with previous ones, and tool definitions are overwritten. This lets you iteratively expand coverage by capturing more workflows and re-analyzing.
How it works¶
Each tool maps a business operation (like "search parking areas" or "get account balance") to an HTTP request template. Tools are protocol-agnostic: the same format works for REST, GraphQL, REST.li, custom RPC, or any other protocol over HTTP.
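As a mental model, a tool definition pairs an input schema with a request template. The shape below is purely illustrative (the field names and the endpoint are assumptions, not Spectral's on-disk format):

```python
# Hypothetical tool definition for a "search parking areas" capability.
search_parking_areas = {
    "name": "search_parking_areas",
    "description": "Search parking areas near a location",
    "request": {
        "method": "GET",
        "path": "/api/v2/parking/search",   # hypothetical endpoint
        "query": {"lat": "{latitude}", "lon": "{longitude}"},
        "headers": {"Accept": "application/json"},
    },
    "input_schema": {
        "type": "object",
        "properties": {
            "latitude": {"type": "number"},
            "longitude": {"type": "number"},
        },
        "required": ["latitude", "longitude"],
    },
}
```

Because the template is just an HTTP request with placeholders, nothing in it assumes REST semantics; a GraphQL or RPC body template fits the same structure.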
The mcp analyze pipeline processes each trace greedily:
- All captures are loaded and merged into a single bundle.
- The LLM identifies the business API origin, filtering out CDN, analytics, and tracker domains.
- For each trace, a lightweight LLM call classifies it as a useful business capability or not (static assets, config endpoints, health checks are skipped).
- For useful traces, a full LLM call builds the tool definition — HTTP method, path pattern, headers, parameters, request body template — using investigation tools (base64/URL/JWT decoding, trace inspection, schema inference).
- Once a trace is claimed by a tool, it is removed from the working set.
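The greedy loop over traces can be sketched like this. classify_trace and build_tool stand in for the lightweight and full LLM calls; their signatures are assumptions for illustration.

```python
def analyze(traces, classify_trace, build_tool):
    """Greedy pass: each useful trace yields one tool, and every trace the
    tool claims (including the trace itself) leaves the working set."""
    working = list(traces)
    tools = []
    while working:
        trace = working[0]
        if not classify_trace(trace):       # lightweight LLM call
            working.remove(trace)           # static asset, health check, etc.
            continue
        # Full LLM call; `claimed` must include `trace` so the loop advances.
        tool, claimed = build_tool(trace, working)
        tools.append(tool)
        working = [t for t in working if t not in claimed]
    return tools
```

The claim step is what keeps the pass greedy: once a tool covers a trace, no later tool is built from it.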
When a tool is called at runtime, the server validates arguments against the tool's JSON Schema (with type coercion), resolves parameter placeholders in the request template, injects auth headers if needed, and makes the HTTP request.
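A minimal sketch of that runtime flow, under the assumed tool-definition shape from earlier (field names, the coercion rule, and brace-style placeholders are illustrative, not Spectral's actual behavior):

```python
def call_tool(tool, args, auth_headers=None):
    """Coerce/validate arguments, resolve placeholders, inject auth
    headers, and return the request that would be sent."""
    coerced = {}
    for name, spec in tool["input_schema"]["properties"].items():
        value = args[name]                     # KeyError = missing argument
        if spec["type"] == "integer" and isinstance(value, str):
            value = int(value)                 # simple type coercion
        coerced[name] = value
    req = tool["request"]
    path = req["path"].format(**coerced)       # resolve {placeholder}s
    headers = dict(req.get("headers", {}))
    if auth_headers:
        headers.update(auth_headers)           # injected transparently
    return {"method": req["method"], "path": path, "headers": headers}
```

Coercion matters because MCP clients sometimes pass numbers as strings; validating against the JSON Schema before the request is built turns agent mistakes into clear errors instead of failed HTTP calls.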