This library provides a lightweight wrapper that makes Anthropic's Model Context Protocol (MCP) tools compatible with LangChain and LangGraph.

> [!NOTE]
> A JavaScript/TypeScript version of this library is also available in langchainjs.

Install the library:

```bash
pip install langchain-mcp-adapters
```

Here is a simple example of using MCP tools with a LangGraph agent. First, install the extra dependencies and set your API key:

```bash
pip install langchain-mcp-adapters langgraph "langchain[openai]"
export OPENAI_API_KEY=<your_api_key>
```
First, let's create an MCP server that can add and multiply numbers.
```python
# math_server.py
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Math")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b

@mcp.tool()
def multiply(a: int, b: int) -> int:
    """Multiply two numbers"""
    return a * b

if __name__ == "__main__":
    mcp.run(transport="stdio")
```
Next, connect to the server and create the agent:

```python
# Create server parameters for stdio connection
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

from langchain.agents import create_agent
from langchain_mcp_adapters.tools import load_mcp_tools

server_params = StdioServerParameters(
    command="python",
    # Make sure to update to the full absolute path to your math_server.py file
    args=["/path/to/math_server.py"],
)

async with stdio_client(server_params) as (read, write):
    async with ClientSession(read, write) as session:
        # Initialize the connection
        await session.initialize()

        # Get tools
        tools = await load_mcp_tools(session)

        # Create and run the agent
        agent = create_agent("openai:gpt-4.1", tools)
        agent_response = await agent.ainvoke({"messages": "what's (3 + 5) x 12?"})
```
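The result is a standard LangGraph state dict; a minimal sketch for pulling out the final answer:

```python
# The last entry in the returned "messages" list is the model's final answer.
print(agent_response["messages"][-1].content)
```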
The library also allows you to connect to multiple MCP servers and load tools from them:
```python
# math_server.py
...

# weather_server.py
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Weather")

@mcp.tool()
async def get_weather(location: str) -> str:
    """Get weather for location."""
    return "It's always sunny in New York"

if __name__ == "__main__":
    mcp.run(transport="streamable-http")
```

Start the weather server (with FastMCP's defaults it serves at http://localhost:8000/mcp):

```bash
python weather_server.py
```
```python
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain.agents import create_agent

client = MultiServerMCPClient(
    {
        "math": {
            "command": "python",
            # Make sure to update to the full absolute path to your math_server.py file
            "args": ["/path/to/math_server.py"],
            "transport": "stdio",
        },
        "weather": {
            # Make sure you start your weather server on port 8000
            "url": "http://localhost:8000/mcp",
            "transport": "http",
        },
    }
)
tools = await client.get_tools()
agent = create_agent("openai:gpt-4.1", tools)
math_response = await agent.ainvoke({"messages": "what's (3 + 5) x 12?"})
weather_response = await agent.ainvoke({"messages": "what is the weather in nyc?"})
```
> [!NOTE]
> The example above will start a new MCP `ClientSession` for each tool invocation. If you would like to explicitly start a session for a given server, you can do:
>
> ```python
> from langchain_mcp_adapters.tools import load_mcp_tools
>
> client = MultiServerMCPClient({...})
> async with client.session("math") as session:
>     tools = await load_mcp_tools(session)
> ```
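Putting that together, a sketch (reusing the math server config from above) that holds one session open for the agent's whole lifetime:

```python
from langchain.agents import create_agent
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain_mcp_adapters.tools import load_mcp_tools

client = MultiServerMCPClient(
    {
        "math": {
            "command": "python",
            "args": ["/path/to/math_server.py"],
            "transport": "stdio",
        },
    }
)

# Every tool call the agent makes inside this block reuses one session,
# instead of opening a fresh ClientSession per invocation.
async with client.session("math") as session:
    tools = await load_mcp_tools(session)
    agent = create_agent("openai:gpt-4.1", tools)
    response = await agent.ainvoke({"messages": "what's (3 + 5) x 12?"})
```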
MCP now supports streamable HTTP transport.
To start an example streamable HTTP server, run the following:
```bash
cd examples/servers/streamable-http-stateless/
uv run mcp-simple-streamablehttp-stateless --port 3000
```
Alternatively, you can use FastMCP directly (as in the examples above).
To use it with the Python MCP SDK's `streamablehttp_client`:
```python
# Use server from examples/servers/streamable-http-stateless/
from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

from langchain.agents import create_agent
from langchain_mcp_adapters.tools import load_mcp_tools

async with streamablehttp_client("http://localhost:3000/mcp") as (read, write, _):
    async with ClientSession(read, write) as session:
        # Initialize the connection
        await session.initialize()

        # Get tools
        tools = await load_mcp_tools(session)
        agent = create_agent("openai:gpt-4.1", tools)
        math_response = await agent.ainvoke({"messages": "what's (3 + 5) x 12?"})
```
Use it with `MultiServerMCPClient`:

```python
# Use server from examples/servers/streamable-http-stateless/
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain.agents import create_agent

client = MultiServerMCPClient(
    {
        "math": {
            "transport": "http",
            "url": "http://localhost:3000/mcp",
        },
    }
)
tools = await client.get_tools()
agent = create_agent("openai:gpt-4.1", tools)
math_response = await agent.ainvoke({"messages": "what's (3 + 5) x 12?"})
```
When connecting to MCP servers, you can include custom headers (e.g., for authentication or tracing) using the `headers` field in the connection configuration. This is supported for the following transports:

- `sse`
- `http` (or `streamable_http`)

Example with `MultiServerMCPClient`:

```python
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain.agents import create_agent

client = MultiServerMCPClient(
    {
        "weather": {
            "transport": "http",
            "url": "http://localhost:8000/mcp",
            "headers": {
                "Authorization": "Bearer YOUR_TOKEN",
                "X-Custom-Header": "custom-value",
            },
        }
    }
)
tools = await client.get_tools()
agent = create_agent("openai:gpt-4.1", tools)
response = await agent.ainvoke({"messages": "what is the weather in nyc?"})
```
> [!NOTE]
> Only the `sse` and `http` transports support runtime headers. These headers are passed with every HTTP request to the MCP server.
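The `sse` variant looks the same apart from the transport name and URL; a sketch, assuming your server exposes its SSE endpoint at `/sse`:

```python
from langchain_mcp_adapters.client import MultiServerMCPClient

client = MultiServerMCPClient(
    {
        "weather": {
            "transport": "sse",
            # /sse is a common convention; adjust to your server's endpoint.
            "url": "http://localhost:8000/sse",
            "headers": {"Authorization": "Bearer YOUR_TOKEN"},
        }
    }
)
```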
You can also use MCP tools with a custom LangGraph `StateGraph`:

```python
from langchain.chat_models import init_chat_model
from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.graph import StateGraph, MessagesState, START
from langgraph.prebuilt import ToolNode, tools_condition

model = init_chat_model("openai:gpt-4.1")

client = MultiServerMCPClient(
    {
        "math": {
            "command": "python",
            # Make sure to update to the full absolute path to your math_server.py file
            "args": ["./examples/math_server.py"],
            "transport": "stdio",
        },
        "weather": {
            # Make sure you start your weather server on port 8000
            "url": "http://localhost:8000/mcp",
            "transport": "http",
        },
    }
)
tools = await client.get_tools()

def call_model(state: MessagesState):
    response = model.bind_tools(tools).invoke(state["messages"])
    return {"messages": response}

builder = StateGraph(MessagesState)
builder.add_node(call_model)
builder.add_node(ToolNode(tools))
builder.add_edge(START, "call_model")
builder.add_conditional_edges(
    "call_model",
    tools_condition,
)
builder.add_edge("tools", "call_model")
graph = builder.compile()

math_response = await graph.ainvoke({"messages": "what's (3 + 5) x 12?"})
weather_response = await graph.ainvoke({"messages": "what is the weather in nyc?"})
```
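Since the compiled graph is a regular LangGraph runnable, you can also stream node updates instead of waiting for the final state; a minimal sketch:

```python
# Stream the output of each node ("call_model", "tools") as it finishes.
async for chunk in graph.astream(
    {"messages": "what's (3 + 5) x 12?"},
    stream_mode="updates",
):
    print(chunk)
```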
> [!TIP]
> Check out this guide on getting started with the LangGraph API server.
If you want to run a LangGraph agent that uses MCP tools in a LangGraph API server, you can use the following setup:
```python
# graph.py
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain.agents import create_agent

async def make_graph():
    client = MultiServerMCPClient(
        {
            "weather": {
                # Make sure you start your weather server on port 8000
                "url": "http://localhost:8000/mcp",
                "transport": "http",
            },
            # ATTENTION: MCP's stdio transport was designed primarily to support
            # applications running on a user's machine. Before using stdio in a
            # web server context, evaluate whether there's a more appropriate
            # solution. For example, do you actually need MCP, or can you get
            # away with a simple `@tool`?
            "math": {
                "command": "python",
                # Make sure to update to the full absolute path to your math_server.py file
                "args": ["/path/to/math_server.py"],
                "transport": "stdio",
            },
        }
    )
    tools = await client.get_tools()
    agent = create_agent("openai:gpt-4.1", tools)
    return agent
```
In your `langgraph.json`, make sure to specify `make_graph` as your graph entrypoint:
```json
{
  "dependencies": ["."],
  "graphs": {
    "agent": "./graph.py:make_graph"
  }
}
```
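You can then run the API server locally with the LangGraph CLI (assuming you have it installed, e.g. `pip install -U "langgraph-cli[inmem]"`):

```bash
langgraph dev
```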