MCP Tool Mirroring Guide

Auto-generate mock implementations from MCP server tool schemas. ToolMirror discovers the tools an MCP server exposes and generates mocks from their schemas, so no manual mock coding is required.


Installation

Install the optional MCP support before using live mirroring:

pip install "stuntdouble[mcp]"

Overview

MCP (Model Context Protocol) Tool Mirroring allows you to:

  1. Discover tools from any MCP server (stdio or HTTP)

  2. Generate mock implementations automatically from schemas

  3. Use the mocked tools in your AI agent tests

┌─────────────────────────────────────────────────────────────────────────────┐
│                         MCP Mirroring Flow                                  │
├─────────────────────────────────────────────────────────────────────────────┤
│                                                                             │
│   1. Discovery                  2. Generation               3. Usage       │
│   ────────────                  ─────────────               ──────         │
│                                                                             │
│   ┌─────────────┐          ┌─────────────────┐        ┌──────────────┐     │
│   │ MCP Server  │          │ MockGenerator   │        │ AI Agent     │     │
│   │             │          │                 │        │              │     │
│   │ tools/list  │──────▶   │ Schema analysis │──────▶ │ bind_tools() │     │
│   │             │          │ Type inference  │        │              │     │
│   │ - tool_a    │          │ Mock creation   │        │ or           │     │
│   │ - tool_b    │          │                 │        │ ToolNode()   │     │
│   │ - tool_c    │          │ Optional: LLM   │        │              │     │
│   └─────────────┘          └─────────────────┘        └──────────────┘     │
│                                                                             │
│   Supported transports: stdio (subprocess), HTTP (remote server)           │
│                                                                             │
└─────────────────────────────────────────────────────────────────────────────┘
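The generation step in the diagram above (schema analysis, type inference, mock creation) can be sketched as a simple walk over each tool's JSON Schema. This is a hypothetical illustration of the idea, not StuntDouble's actual MockGenerator:

```python
from typing import Any

def mock_from_schema(schema: dict[str, Any]) -> Any:
    """Generate a static placeholder value from a JSON Schema fragment."""
    t = schema.get("type", "object")
    if t == "object":
        # Recurse into each declared property
        return {name: mock_from_schema(sub)
                for name, sub in schema.get("properties", {}).items()}
    if t == "array":
        # One sample element, default to string items
        return [mock_from_schema(schema.get("items", {"type": "string"}))]
    # Leaf types get fixed placeholders
    return {"string": "example", "integer": 0,
            "number": 0.0, "boolean": True}.get(t)

# Example: a create_invoice input schema as returned by tools/list
invoice_schema = {
    "type": "object",
    "properties": {
        "customer_id": {"type": "string"},
        "amount": {"type": "number"},
        "line_items": {"type": "array",
                       "items": {"type": "object",
                                 "properties": {"sku": {"type": "string"}}}},
    },
}
print(mock_from_schema(invoice_schema))
# → {'customer_id': 'example', 'amount': 0.0, 'line_items': [{'sku': 'example'}]}
```

An LLM-powered generator (covered below) replaces the fixed placeholders with contextual values, but the schema walk is the same.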

When to Use Mirroring

| Scenario | Recommendation |
|---|---|
| Testing agent with MCP tools | ✅ Mirror tools for fast, offline testing |
| CI/CD pipelines | ✅ Use ToolMirror.for_ci() for deterministic mocks |
| Development without MCP server | ✅ Mirror once, develop offline |
| Need realistic mock data | ✅ Use ToolMirror.with_llm() |
| Production | ❌ Use real MCP tools |


Quick Start

Basic Mirroring (stdio)

from stuntdouble.mirroring import ToolMirror

# Create mirror instance
mirror = ToolMirror()

# Mirror tools from MCP server (subprocess)
mirror.mirror(["python", "-m", "my_mcp_server"])

# Get LangChain-compatible tools
tools = mirror.to_langchain_tools()

# Use with agent
agent = llm.bind_tools(tools)
response = agent.invoke("Create an invoice for customer CUST-001")

HTTP Server Mirroring

from stuntdouble.mirroring import ToolMirror

mirror = ToolMirror()

# Mirror from HTTP MCP server
mirror.mirror(http_url="http://localhost:8080/mcp")

tools = mirror.to_langchain_tools()

One-Line Convenience Functions

from stuntdouble.mirroring import mirror, mirror_for_agent

# One-line mirroring (returns result dict)
result = mirror(["python", "-m", "my_server"])
print(f"Mirrored {result['mirrored_count']} tools: {result['tools']}")

# Mirror and get tools ready for agent
tools = mirror_for_agent(llm, ["python", "-m", "my_server"])
agent = llm.bind_tools(tools)

LLM-Powered Generation

For realistic, contextual mock data, use an LLM to generate responses:

from langchain_openai import ChatOpenAI
from stuntdouble.mirroring import ToolMirror

llm = ChatOpenAI(model="gpt-4o")
mirror = ToolMirror.with_llm(llm)

mirror.mirror(["python", "-m", "financial_mcp"])
tools = mirror.to_langchain_tools()

# Tools return realistic, contextual mock data
# e.g., create_invoice returns realistic invoice data

Quality Presets

from stuntdouble.mirroring import ToolMirror, QualityPreset

# High quality - uses LLM for all responses
mirror = ToolMirror.with_llm(llm, quality=QualityPreset.HIGH)

# Balanced quality - parameter-aware static generation
mirror = ToolMirror.with_llm(llm, quality=QualityPreset.BALANCED)

# Fast - static responses only (no LLM calls)
mirror = ToolMirror.for_ci()

HTTP Authentication

Mirror tools from remote MCP servers that require authentication:

from stuntdouble.mirroring import ToolMirror

mirror = ToolMirror()

# Bearer token authentication
result = mirror.mirror(
    http_url="https://api.example.com/mcp",
    headers={"Authorization": "Bearer your-token-here"}
)

# API key authentication
result = mirror.mirror(
    http_url="http://localhost:8080",
    headers={
        "X-API-Key": "abc123",
        "X-Client-ID": "my-app"
    }
)

# Multiple custom headers
result = mirror.mirror(
    http_url="http://internal.company.com/mcp",
    headers={
        "Authorization": "Bearer token",
        "X-Request-ID": "unique-id",
        "X-Environment": "production"
    }
)

tools = mirror.to_langchain_tools()

Security Features

Headers are automatically included in all HTTP requests:

  • SSE connection establishment (GET /sse)

  • JSON-RPC message calls (POST /messages)

StuntDouble provides built-in security:

| Feature | Description |
|---|---|
| Header Validation | Headers must be a dict with string keys and values |
| Sensitive Data Protection | Headers are redacted in logs and errors |
| Protocol Compliance | Content-Type: application/json is enforced automatically |
| Transport Validation | Warns if headers are specified for a stdio transport |
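The validation and redaction behaviors described above amount to a dict type check plus masking of known-sensitive header names before logging. A hypothetical sketch of that logic (not StuntDouble's actual implementation):

```python
# Header names whose values should never reach logs; illustrative list
SENSITIVE = {"authorization", "x-api-key", "cookie"}

def validate_headers(headers):
    """Reject anything that is not a dict of str -> str."""
    if not isinstance(headers, dict):
        raise TypeError("headers must be a dict")
    for key, value in headers.items():
        if not (isinstance(key, str) and isinstance(value, str)):
            raise TypeError("header keys and values must be strings")
    return headers

def redact_headers(headers):
    """Return a copy that is safe to log: sensitive values are masked."""
    return {k: ("<redacted>" if k.lower() in SENSITIVE else v)
            for k, v in headers.items()}

print(redact_headers({"Authorization": "Bearer secret", "X-Request-ID": "42"}))
# → {'Authorization': '<redacted>', 'X-Request-ID': '42'}
```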


Customizing Mock Data

Override generated mock data for specific tools:

from stuntdouble.mirroring import ToolMirror

mirror = ToolMirror()
mirror.mirror(["python", "-m", "my_server"])

# Customize specific tool responses
mirror.customize("get_customer", {
    "id": "CUST-001",
    "name": "Custom Test Corp",
    "tier": "platinum"
})

# List all mirrored tools
print(mirror.list_mirrors())
# [MirrorInfo(...), MirrorInfo(...), ...]

tools = mirror.to_langchain_tools()

CI/CD Optimization

For CI/CD pipelines, use the optimized factory:

from stuntdouble.mirroring import ToolMirror

# Optimized for CI: deterministic, no LLM calls, fast
mirror = ToolMirror.for_ci()
mirror.mirror(["python", "-m", "my_server"])
tools = mirror.to_langchain_tools()

Caching

Enable caching for faster subsequent runs:

mirror = ToolMirror().enable_caching(ttl_minutes=30)
mirror.mirror(["python", "-m", "my_server"])
# Second call uses cached tool definitions
mirror.mirror(["python", "-m", "my_server"])
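A cache like this is typically keyed on the server command and checks a TTL on every read. A hypothetical sketch of the mechanism (not StuntDouble's actual cache):

```python
import time

class ToolDefinitionCache:
    """Cache tool definitions per server, expiring entries after a TTL."""

    def __init__(self, ttl_minutes: float):
        self.ttl_seconds = ttl_minutes * 60
        self._store: dict[str, tuple[float, list]] = {}

    def get(self, server_key: str):
        entry = self._store.get(server_key)
        if entry is None:
            return None
        stored_at, tools = entry
        if time.monotonic() - stored_at > self.ttl_seconds:
            # Entry expired: drop it and force a fresh discovery
            del self._store[server_key]
            return None
        return tools

    def put(self, server_key: str, tools: list):
        self._store[server_key] = (time.monotonic(), tools)

cache = ToolDefinitionCache(ttl_minutes=30)
cache.put("python -m my_server", [{"name": "tool_a"}])
print(cache.get("python -m my_server"))
# → [{'name': 'tool_a'}]
```

Within the TTL, a repeat mirror() call can skip the tools/list round trip entirely.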

API Reference

ToolMirror Class

| Method | Description |
|---|---|
| ToolMirror() | Create a mirror with default settings |
| ToolMirror.with_llm(llm, quality=) | Create with LLM-powered generation |
| ToolMirror.for_ci() | Create a mirror optimized for CI/CD |
| ToolMirror.for_langgraph(registry=None, quality=) | Create a mirror that also registers mirrored mocks in a LangGraph registry |
| mirror.enable_caching(ttl_minutes=) | Enable response caching |
| mirror.enable_llm(llm) | Add LLM generation to an existing mirror |

Mirroring Methods

| Method | Description |
|---|---|
| mirror.mirror(server_command=, http_url=, headers=) | Mirror tools from an MCP server |
| mirror.to_langchain_tools() | Convert to LangChain StructuredTool format |
| mirror.customize(tool_name, data) | Override mock data for a tool |
| mirror.list_mirrors() | List mirrored tools as MirrorInfo objects |
| mirror.list_mirrors_by_server(server_name) | Filter mirrored tools by source server |
| mirror.get_stats() | Return cache and generation statistics |
| mirror.unregister(tool_name) | Remove a mirrored tool |

Convenience Functions

from stuntdouble.mirroring import mirror, mirror_for_agent

# Mirror and get result dict
result = mirror(server_command=["python", "-m", "server"])
# Returns: {'mirrored_count': 5, 'tools': [...], 'server_name': '...'}

# Mirror and get tools ready for agent
tools = mirror_for_agent(llm, server_command=["python", "-m", "server"])

Integration with Approaches

MCP Tool Mirroring works with both StuntDouble approaches:


See Also