# Python API Reference

API reference documentation for LangChain Python packages.

## Packages

- [LangChain](/python/langchain)
  # 🦜️🔗 LangChain

[![PyPI - Version](https://img.shields.io/pypi/v/langchain?label=%20)](https://pypi.org/project/langchain/#history)
[![PyPI - License](https://img.shields.io/pypi/l/langchain)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pepy/dt/langchain)](https://pypistats.org/packages/langchain)
[![Twitter](https://img.shields.io/twitter/url/https/twitter.com/langchain.svg?style=social&label=Follow%20%40LangChain)](https://x.com/langchain)

Looking for the JS/TS version? Check out [LangChain.js](https://github.com/langchain-ai/langchainjs).

To help you ship LangChain apps to production faster, check out [LangSmith](https://www.langchain.com/langsmith).
[LangSmith](https://www.langchain.com/langsmith) is a unified developer platform for building, testing, and monitoring LLM applications.

## Quick Install

```bash
pip install langchain
```

## 🤔 What is this?

LangChain is the easiest way to start building agents and applications powered by LLMs. With under 10 lines of code, you can connect to OpenAI, Anthropic, Google, and [more](https://docs.langchain.com/oss/python/integrations/providers/overview). LangChain provides a pre-built agent architecture and model integrations to help you get started quickly and seamlessly incorporate LLMs into your agents and applications.

We recommend you use LangChain if you want to quickly build agents and autonomous applications. Use [LangGraph](https://docs.langchain.com/oss/python/langgraph/overview), our low-level agent orchestration framework and runtime, when you have more advanced needs that require a combination of deterministic and agentic workflows, heavy customization, and carefully controlled latency.

LangChain [agents](https://docs.langchain.com/oss/python/langchain/agents) are built on top of LangGraph in order to provide durable execution, streaming, human-in-the-loop, persistence, and more. (You do not need to know LangGraph for basic LangChain agent usage.)

## 📖 Documentation

For full documentation, see the [API reference](https://reference.langchain.com/python/langchain/langchain/). For conceptual guides, tutorials, and examples on using LangChain, see the [LangChain Docs](https://docs.langchain.com/oss/python/langchain/overview). You can also chat with the docs using [Chat LangChain](https://chat.langchain.com).

## 📕 Releases & Versioning

See our [Releases](https://docs.langchain.com/oss/python/release-policy) and [Versioning](https://docs.langchain.com/oss/python/versioning) policies.

## 💁 Contributing

As an open-source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation.

For detailed information on how to contribute, see the [Contributing Guide](https://docs.langchain.com/oss/python/contributing/overview).
- [LangChain Core](/python/langchain-core)
  # 🦜🍎️ LangChain Core

[![PyPI - Version](https://img.shields.io/pypi/v/langchain-core?label=%20)](https://pypi.org/project/langchain-core/#history)
[![PyPI - License](https://img.shields.io/pypi/l/langchain-core)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pepy/dt/langchain-core)](https://pypistats.org/packages/langchain-core)
[![Twitter](https://img.shields.io/twitter/url/https/twitter.com/langchain.svg?style=social&label=Follow%20%40LangChain)](https://x.com/langchain)

Looking for the JS/TS version? Check out [LangChain.js](https://github.com/langchain-ai/langchainjs).

To help you ship LangChain apps to production faster, check out [LangSmith](https://www.langchain.com/langsmith).
[LangSmith](https://www.langchain.com/langsmith) is a unified developer platform for building, testing, and monitoring LLM applications.

## Quick Install

```bash
pip install langchain-core
```

## 🤔 What is this?

LangChain Core contains the base abstractions that power the LangChain ecosystem.

These abstractions are designed to be as modular and simple as possible.

The benefit of having these abstractions is that any provider can implement the required interface and then easily be used in the rest of the LangChain ecosystem.

## ⛰️ Why build on top of LangChain Core?

The LangChain ecosystem is built on top of `langchain-core`. Some of the benefits:

- **Modularity**: We've designed Core around abstractions that are independent of each other, and not tied to any specific model provider.
- **Stability**: We are committed to a stable versioning scheme, and will communicate any breaking changes with advance notice and version bumps.
- **Battle-tested**: Core components have the largest install base in the LLM ecosystem, and are used in production by many companies.

## 📖 Documentation

For full documentation, see the [API reference](https://reference.langchain.com/python/langchain_core/). For conceptual guides, tutorials, and examples on using LangChain, see the [LangChain Docs](https://docs.langchain.com/oss/python/langchain/overview). You can also chat with the docs using [Chat LangChain](https://chat.langchain.com).

## 📕 Releases & Versioning

See our [Releases](https://docs.langchain.com/oss/python/release-policy) and [Versioning](https://docs.langchain.com/oss/python/versioning) policies.

## 💁 Contributing

As an open-source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation.

For detailed information on how to contribute, see the [Contributing Guide](https://docs.langchain.com/oss/python/contributing/overview).
- [MCP Adapters](/python/langchain-mcp-adapters)
  # LangChain MCP Adapters

This library provides a lightweight wrapper that makes [Anthropic Model Context Protocol (MCP)](https://modelcontextprotocol.io/introduction) tools compatible with [LangChain](https://github.com/langchain-ai/langchain) and [LangGraph](https://github.com/langchain-ai/langgraph).

![MCP](https://raw.githubusercontent.com/langchain-ai/langchain-mcp-adapters/141be26f4b17e702e11568c598587aebd257500c/./static/img/mcp.png)

> [!NOTE]
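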
> A JavaScript/TypeScript version of this library is also available at [langchainjs](https://github.com/langchain-ai/langchainjs/tree/main/libs/langchain-mcp-adapters/).

## Features

- 🛠️ Convert MCP tools into [LangChain tools](https://python.langchain.com/docs/concepts/tools/) that can be used with [LangGraph](https://github.com/langchain-ai/langgraph) agents
- 📦 A client implementation that allows you to connect to multiple MCP servers and load tools from them

## Installation

```bash
pip install langchain-mcp-adapters
```

## Quickstart

Here is a simple example of using the MCP tools with a LangGraph agent.

```bash
pip install langchain-mcp-adapters langgraph "langchain[openai]"

export OPENAI_API_KEY=<your_api_key>
```

### Server

First, let's create an MCP server that can add and multiply numbers.

```python
# math_server.py
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Math")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b

@mcp.tool()
def multiply(a: int, b: int) -> int:
    """Multiply two numbers"""
    return a * b

if __name__ == "__main__":
    mcp.run(transport="stdio")
```

### Client

```python
# Create server parameters for stdio connection
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

from langchain_mcp_adapters.tools import load_mcp_tools
from langchain.agents import create_agent

server_params = StdioServerParameters(
    command="python",
    # Make sure to update to the full absolute path to your math_server.py file
    args=["/path/to/math_server.py"],
)

async with stdio_client(server_params) as (read, write):
    async with ClientSession(read, write) as session:
        # Initialize the connection
        await session.initialize()

        # Get tools
        tools = await load_mcp_tools(session)

        # Create and run the agent
        agent = create_agent("openai:gpt-4.1", tools)
        agent_response = await agent.ainvoke({"messages": "what's (3 + 5) x 12?"})
```

## Multiple MCP Servers

The library also allows you to connect to multiple MCP servers and load tools from them:

### Server

```python
# math_server.py
...

# weather_server.py
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Weather")

@mcp.tool()
async def get_weather(location: str) -> str:
    """Get weather for location."""
    return "It's always sunny in New York"

if __name__ == "__main__":
    mcp.run(transport="http")
```

```bash
python weather_server.py
```

### Client

```python
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain.agents import create_agent

client = MultiServerMCPClient(
    {
        "math": {
            "command": "python",
            # Make sure to update to the full absolute path to your math_server.py file
            "args": ["/path/to/math_server.py"],
            "transport": "stdio",
        },
        "weather": {
            # Make sure you start your weather server on port 8000
            "url": "http://localhost:8000/mcp",
            "transport": "http",
        }
    }
)
tools = await client.get_tools()
agent = create_agent("openai:gpt-4.1", tools)
math_response = await agent.ainvoke({"messages": "what's (3 + 5) x 12?"})
weather_response = await agent.ainvoke({"messages": "what is the weather in nyc?"})
```

> [!NOTE]
> The example above will start a new MCP `ClientSession` for each tool invocation. If you would like to explicitly start a session for a given server, you can do:
>
> ```python
> from langchain_mcp_adapters.tools import load_mcp_tools
>
> client = MultiServerMCPClient({...})
> async with client.session("math") as session:
>     tools = await load_mcp_tools(session)
> ```

## Streamable HTTP

MCP now supports [streamable HTTP](https://modelcontextprotocol.io/specification/2025-03-26/basic/transports#streamable-http) transport.

To start an [example](https://github.com/langchain-ai/langchain-mcp-adapters/tree/141be26f4b17e702e11568c598587aebd257500c/examples/servers/streamable-http-stateless/) streamable HTTP server, run the following:

```bash
cd examples/servers/streamable-http-stateless/
uv run mcp-simple-streamablehttp-stateless --port 3000
```

Alternatively, you can use FastMCP directly (as in the examples above).

To use it with Python MCP SDK `streamablehttp_client`:

```python
# Use server from examples/servers/streamable-http-stateless/

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

from langchain.agents import create_agent
from langchain_mcp_adapters.tools import load_mcp_tools

async with streamablehttp_client("http://localhost:3000/mcp") as (read, write, _):
    async with ClientSession(read, write) as session:
        # Initialize the connection
        await session.initialize()

        # Get tools
        tools = await load_mcp_tools(session)
        agent = create_agent("openai:gpt-4.1", tools)
        math_response = await agent.ainvoke({"messages": "what's (3 + 5) x 12?"})
```

Use it with `MultiServerMCPClient`:

```python
# Use server from examples/servers/streamable-http-stateless/
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain.agents import create_agent

client = MultiServerMCPClient(
    {
        "math": {
            "transport": "http",
            "url": "http://localhost:3000/mcp"
        },
    }
)
tools = await client.get_tools()
agent = create_agent("openai:gpt-4.1", tools)
math_response = await agent.ainvoke({"messages": "what's (3 + 5) x 12?"})
```

## Passing runtime headers

When connecting to MCP servers, you can include custom headers (e.g., for authentication or tracing) using the `headers` field in the connection configuration. This is supported for the following transports:

- `sse`
- `http` (or `streamable_http`)

### Example: passing headers with `MultiServerMCPClient`

```python
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain.agents import create_agent

client = MultiServerMCPClient(
    {
        "weather": {
            "transport": "http",
            "url": "http://localhost:8000/mcp",
            "headers": {
                "Authorization": "Bearer YOUR_TOKEN",
                "X-Custom-Header": "custom-value"
            },
        }
    }
)
tools = await client.get_tools()
agent = create_agent("openai:gpt-4.1", tools)
response = await agent.ainvoke({"messages": "what is the weather in nyc?"})
```

> Only `sse` and `http` transports support runtime headers. These headers are passed with every HTTP request to the MCP server.

## Using with LangGraph StateGraph

```python
from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.graph import StateGraph, MessagesState, START
from langgraph.prebuilt import ToolNode, tools_condition

from langchain.chat_models import init_chat_model
model = init_chat_model("openai:gpt-4.1")

client = MultiServerMCPClient(
    {
        "math": {
            "command": "python",
            # Make sure to update to the full absolute path to your math_server.py file
            "args": ["./examples/math_server.py"],
            "transport": "stdio",
        },
        "weather": {
            # make sure you start your weather server on port 8000
            "url": "http://localhost:8000/mcp",
            "transport": "http",
        }
    }
)
tools = await client.get_tools()

def call_model(state: MessagesState):
    response = model.bind_tools(tools).invoke(state["messages"])
    return {"messages": response}

builder = StateGraph(MessagesState)
builder.add_node(call_model)
builder.add_node(ToolNode(tools))
builder.add_edge(START, "call_model")
builder.add_conditional_edges(
    "call_model",
    tools_condition,
)
builder.add_edge("tools", "call_model")
graph = builder.compile()
math_response = await graph.ainvoke({"messages": "what's (3 + 5) x 12?"})
weather_response = await graph.ainvoke({"messages": "what is the weather in nyc?"})
```

## Using with LangGraph API Server

> [!TIP]
> Check out [this guide](https://langchain-ai.github.io/langgraph/tutorials/langgraph-platform/local-server/) on getting started with LangGraph API server.

If you want to run a LangGraph agent that uses MCP tools in a LangGraph API server, you can use the following setup:

```python
# graph.py
from contextlib import asynccontextmanager
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain.agents import create_agent

async def make_graph():
    client = MultiServerMCPClient(
        {
            "weather": {
                # make sure you start your weather server on port 8000
                "url": "http://localhost:8000/mcp",
                "transport": "http",
            },
            # ATTENTION: MCP's stdio transport was designed primarily to support applications running on a user's machine.
            # Before using stdio in a web server context, evaluate whether there's a more appropriate solution.
            # For example, do you actually need MCP? or can you get away with a simple `@tool`?
            "math": {
                "command": "python",
                # Make sure to update to the full absolute path to your math_server.py file
                "args": ["/path/to/math_server.py"],
                "transport": "stdio",
            },
        }
    )
    tools = await client.get_tools()
    agent = create_agent("openai:gpt-4.1", tools)
    return agent
```

In your [`langgraph.json`](https://langchain-ai.github.io/langgraph/cloud/reference/cli/#configuration-file) make sure to specify `make_graph` as your graph entrypoint:

```json
{
  "dependencies": ["."],
  "graphs": {
    "agent": "./graph.py:make_graph"
  }
}
```
- [LangChain Classic](/python/langchain-classic)
  # 🦜️🔗 LangChain Classic

[![PyPI - Version](https://img.shields.io/pypi/v/langchain-classic?label=%20)](https://pypi.org/project/langchain-classic/#history)
[![PyPI - License](https://img.shields.io/pypi/l/langchain-classic)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pepy/dt/langchain-classic)](https://pypistats.org/packages/langchain-classic)
[![Twitter](https://img.shields.io/twitter/url/https/twitter.com/langchain.svg?style=social&label=Follow%20%40LangChain)](https://x.com/langchain)

Looking for the JS/TS version? Check out [LangChain.js](https://github.com/langchain-ai/langchainjs).

To help you ship LangChain apps to production faster, check out [LangSmith](https://www.langchain.com/langsmith).
[LangSmith](https://www.langchain.com/langsmith) is a unified developer platform for building, testing, and monitoring LLM applications.

## Quick Install

```bash
pip install langchain-classic
```

## 🤔 What is this?

The `langchain-classic` package contains legacy chains, `langchain-community` re-exports, the indexing API, deprecated functionality, and more.

In most cases, you should be using the main [`langchain`](https://pypi.org/project/langchain/) package.

## 📖 Documentation

For full documentation, see the [API reference](https://reference.langchain.com/python/langchain_classic). For conceptual guides, tutorials, and examples on using LangChain, see the [LangChain Docs](https://docs.langchain.com/oss/python/langchain/overview).

## 📕 Releases & Versioning

See our [Releases](https://docs.langchain.com/oss/python/release-policy) and [Versioning](https://docs.langchain.com/oss/python/versioning) policies.

## 💁 Contributing

As an open-source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation.

For detailed information on how to contribute, see the [Contributing Guide](https://docs.langchain.com/oss/python/contributing/overview).
- [Standard Tests](/python/langchain-tests)
  # 🦜️🔗 langchain-tests

[![PyPI - Version](https://img.shields.io/pypi/v/langchain-tests?label=%20)](https://pypi.org/project/langchain-tests/#history)
[![PyPI - License](https://img.shields.io/pypi/l/langchain-tests)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pepy/dt/langchain-tests)](https://pypistats.org/packages/langchain-tests)
[![Twitter](https://img.shields.io/twitter/url/https/twitter.com/langchain.svg?style=social&label=Follow%20%40LangChain)](https://x.com/langchain)

Looking for the JS/TS version? Check out [LangChain.js](https://github.com/langchain-ai/langchainjs).

## Quick Install

```bash
pip install langchain-tests
```

## 🤔 What is this?

This is a testing library for LangChain integrations. It contains the base classes for a standard set of tests.

## 📖 Documentation

For full documentation, see the [API reference](https://reference.langchain.com/python/langchain_tests/).

## 📕 Releases & Versioning

See our [Releases](https://docs.langchain.com/oss/python/release-policy) and [Versioning](https://docs.langchain.com/oss/python/versioning) policies.

We encourage pinning your version to a specific version in order to avoid breaking your CI when we publish new tests. We recommend upgrading to the latest version periodically to make sure you have the latest tests.

Not pinning your version will ensure you always have the latest tests, but it may also break your CI if we introduce tests that your integration doesn't pass.
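A pinned requirement might look like the following (the version number is illustrative only; use whichever release you have verified against):

```
# requirements-test.txt -- pin exactly; bump deliberately
langchain-tests==0.3.0
```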

## 💁 Contributing

As an open-source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation.

For detailed information on how to contribute, see the [Contributing Guide](https://docs.langchain.com/oss/python/contributing/overview).

## Usage

To add standard tests to an integration package (e.g., for a chat model), you need to create

1. A unit test class that inherits from `ChatModelUnitTests`
2. An integration test class that inherits from `ChatModelIntegrationTests`

`tests/unit_tests/test_standard.py`:

```python
"""Standard LangChain interface tests"""

from typing import Type

import pytest
from langchain_core.language_models import BaseChatModel
from langchain_tests.unit_tests import ChatModelUnitTests

from langchain_parrot_chain import ChatParrotChain


class TestParrotChainStandard(ChatModelUnitTests):
    @pytest.fixture
    def chat_model_class(self) -> Type[BaseChatModel]:
        return ChatParrotChain
```

`tests/integration_tests/test_standard.py`:

```python
"""Standard LangChain interface tests"""

from typing import Type

import pytest
from langchain_core.language_models import BaseChatModel
from langchain_tests.integration_tests import ChatModelIntegrationTests

from langchain_parrot_chain import ChatParrotChain


class TestParrotChainStandard(ChatModelIntegrationTests):
    @pytest.fixture
    def chat_model_class(self) -> Type[BaseChatModel]:
        return ChatParrotChain
```

## Reference

The following fixtures are configurable in the test classes. Anything not marked
as required is optional.

- `chat_model_class` (required): The class of the chat model to be tested
- `chat_model_params`: The keyword arguments to pass to the chat model constructor
- `chat_model_has_tool_calling`: Whether the chat model can call tools. By default, this is set to `hasattr(chat_model_class, 'bind_tools')`
- `chat_model_has_structured_output`: Whether the chat model supports structured output. By default, this is set to `hasattr(chat_model_class, 'with_structured_output')`
- [Text Splitters](/python/langchain-text-splitters)
  # 🦜✂️ LangChain Text Splitters

[![PyPI - Version](https://img.shields.io/pypi/v/langchain-text-splitters?label=%20)](https://pypi.org/project/langchain-text-splitters/#history)
[![PyPI - License](https://img.shields.io/pypi/l/langchain-text-splitters)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pepy/dt/langchain-text-splitters)](https://pypistats.org/packages/langchain-text-splitters)
[![Twitter](https://img.shields.io/twitter/url/https/twitter.com/langchain.svg?style=social&label=Follow%20%40LangChain)](https://x.com/langchain)

Looking for the JS/TS version? Check out [LangChain.js](https://github.com/langchain-ai/langchainjs).

## Quick Install

```bash
pip install langchain-text-splitters
```

## 🤔 What is this?

LangChain Text Splitters contains utilities for splitting a wide variety of text documents into chunks.

## 📖 Documentation

For full documentation, see the [API reference](https://reference.langchain.com/python/langchain_text_splitters/).

## 📕 Releases & Versioning

See our [Releases](https://docs.langchain.com/oss/python/release-policy) and [Versioning](https://docs.langchain.com/oss/python/versioning) policies.

We encourage pinning to a specific version in order to avoid breaking your CI when we publish new releases. We recommend upgrading periodically to pick up the latest improvements.

Leaving the version unpinned ensures you always have the latest release, but it may also break your CI if a release introduces changes your code doesn't handle.

## 💁 Contributing

As an open-source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation.

For detailed information on how to contribute, see the [Contributing Guide](https://docs.langchain.com/oss/python/contributing/overview).
- [LangGraph](/python/langgraph)
  <picture class="github-only">
  <source media="(prefers-color-scheme: light)" srcset="https://langchain-ai.github.io/langgraph/static/wordmark_dark.svg">
  <source media="(prefers-color-scheme: dark)" srcset="https://langchain-ai.github.io/langgraph/static/wordmark_light.svg">
  <img alt="LangGraph Logo" src="https://langchain-ai.github.io/langgraph/static/wordmark_dark.svg" width="80%">
</picture>

<div>
<br>
</div>

[![Version](https://img.shields.io/pypi/v/langgraph.svg)](https://pypi.org/project/langgraph/)
[![Downloads](https://static.pepy.tech/badge/langgraph/month)](https://pepy.tech/project/langgraph)
[![Open Issues](https://img.shields.io/github/issues-raw/langchain-ai/langgraph)](https://github.com/langchain-ai/langgraph/issues)
[![Docs](https://img.shields.io/badge/docs-latest-blue)](https://docs.langchain.com/oss/python/langgraph/overview)

Trusted by companies shaping the future of agents – including Klarna, Replit, Elastic, and more – LangGraph is a low-level orchestration framework for building, managing, and deploying long-running, stateful agents.

## Get started

Install LangGraph:

```bash
pip install -U langgraph
```

Create a simple workflow:

```python
from langgraph.graph import START, StateGraph
from typing_extensions import TypedDict


class State(TypedDict):
    text: str


def node_a(state: State) -> dict:
    return {"text": state["text"] + "a"}


def node_b(state: State) -> dict:
    return {"text": state["text"] + "b"}


graph = StateGraph(State)
graph.add_node("node_a", node_a)
graph.add_node("node_b", node_b)
graph.add_edge(START, "node_a")
graph.add_edge("node_a", "node_b")

print(graph.compile().invoke({"text": ""}))
# {'text': 'ab'}
```

Get started with the [LangGraph Quickstart](https://docs.langchain.com/oss/python/langgraph/quickstart).

To quickly build agents with LangChain's `create_agent` (built on LangGraph), see the [LangChain Agents documentation](https://docs.langchain.com/oss/python/langchain/agents).

## Core benefits

LangGraph provides low-level supporting infrastructure for *any* long-running, stateful workflow or agent. LangGraph does not abstract prompts or architecture, and provides the following central benefits:

- [Durable execution](https://docs.langchain.com/oss/python/langgraph/durable-execution): Build agents that persist through failures and can run for extended periods, automatically resuming from exactly where they left off.
- [Human-in-the-loop](https://docs.langchain.com/oss/python/langgraph/interrupts): Seamlessly incorporate human oversight by inspecting and modifying agent state at any point during execution.
- [Comprehensive memory](https://docs.langchain.com/oss/python/langgraph/memory): Create truly stateful agents with both short-term working memory for ongoing reasoning and long-term persistent memory across sessions.
- [Debugging with LangSmith](http://www.langchain.com/langsmith): Gain deep visibility into complex agent behavior with visualization tools that trace execution paths, capture state transitions, and provide detailed runtime metrics.
- [Production-ready deployment](https://docs.langchain.com/langsmith/app-development): Deploy sophisticated agent systems confidently with scalable infrastructure designed to handle the unique challenges of stateful, long-running workflows.

## LangGraph’s ecosystem

While LangGraph can be used standalone, it also integrates seamlessly with any LangChain product, giving developers a full suite of tools for building agents. To improve your LLM application development, pair LangGraph with:

- [LangSmith](http://www.langchain.com/langsmith) — Helpful for agent evals and observability. Debug poor-performing LLM app runs, evaluate agent trajectories, gain visibility in production, and improve performance over time.
- [LangSmith Deployment](https://docs.langchain.com/langsmith/deployments) — Deploy and scale agents effortlessly with a purpose-built deployment platform for long-running, stateful workflows. Discover, reuse, configure, and share agents across teams — and iterate quickly with visual prototyping in [LangGraph Studio](https://docs.langchain.com/oss/python/langgraph/studio).
- [LangChain](https://docs.langchain.com/oss/python/langchain/overview) – Provides integrations and composable components to streamline LLM application development.

> [!NOTE]
> Looking for the JS version of LangGraph? See the [JS repo](https://github.com/langchain-ai/langgraphjs) and the [JS docs](https://docs.langchain.com/oss/javascript/langgraph/overview).

## Additional resources

- [Guides](https://docs.langchain.com/oss/python/langgraph/guides): Quick, actionable code snippets for topics such as streaming, adding memory & persistence, and design patterns (e.g. branching, subgraphs, etc.).
- [Reference](https://reference.langchain.com/python/langgraph/): Detailed reference on core classes, methods, how to use the graph and checkpointing APIs, and higher-level prebuilt components.
- [Examples](https://docs.langchain.com/oss/python/langgraph/agentic-rag): Guided examples on getting started with LangGraph.
- [LangChain Forum](https://forum.langchain.com/): Connect with the community and share all of your technical questions, ideas, and feedback.
- [LangChain Academy](https://academy.langchain.com/courses/intro-to-langgraph): Learn the basics of LangGraph in our free, structured course.
- [Case studies](https://www.langchain.com/built-with-langgraph): Hear how industry leaders use LangGraph to ship AI applications at scale.

## Acknowledgements

LangGraph is inspired by [Pregel](https://research.google/pubs/pub37252/) and [Apache Beam](https://beam.apache.org/). The public interface draws inspiration from [NetworkX](https://networkx.org/documentation/latest/). LangGraph is built by LangChain Inc, the creators of LangChain, but can be used without LangChain.
- [LangGraph CLI](/python/langgraph-cli)
  # LangGraph CLI

The official command-line interface for LangGraph, providing tools to create, develop, and deploy LangGraph applications.

## Installation

Install via pip:
```bash
pip install langgraph-cli
```

For development mode with hot reloading:
```bash
pip install "langgraph-cli[inmem]"
```

## Commands

### `langgraph new` 🌱
Create a new LangGraph project from a template
```bash
langgraph new [PATH] --template TEMPLATE_NAME
```

### `langgraph dev` 🏃‍♀️
Run LangGraph API server in development mode with hot reloading
```bash
langgraph dev [OPTIONS]
  --host TEXT             Host to bind to (default: 127.0.0.1)
  --port INTEGER          Port to bind to (default: 2024)
  --no-reload             Disable auto-reload
  --debug-port INTEGER    Enable remote debugging
  --no-browser            Skip opening browser window
  -c, --config FILE       Config file path (default: langgraph.json)
```

### `langgraph up` 🚀
Launch LangGraph API server in Docker
```bash
langgraph up [OPTIONS]
  -p, --port INTEGER      Port to expose (default: 8123)
  --wait                  Wait for services to start
  --watch                 Restart on file changes
  --verbose               Show detailed logs
  -c, --config FILE       Config file path
  -d, --docker-compose    Additional services file
```

### `langgraph build`
Build a Docker image for your LangGraph application
```bash
langgraph build -t IMAGE_TAG [OPTIONS]
  --platform TEXT         Target platforms (e.g., linux/amd64,linux/arm64)
  --pull / --no-pull      Use latest/local base image
  -c, --config FILE       Config file path
```

### `langgraph dockerfile`
Generate a Dockerfile for custom deployments
```bash
langgraph dockerfile SAVE_PATH [OPTIONS]
  -c, --config FILE       Config file path
```

## Configuration

The CLI uses a `langgraph.json` configuration file with these key settings:

```json
{
  "dependencies": ["langchain_openai", "./your_package"],  // Required: Package dependencies
  "graphs": {
    "my_graph": "./your_package/file.py:graph"            // Required: Graph definitions
  },
  "env": "./.env",                                        // Optional: Environment variables
  "python_version": "3.11",                               // Optional: Python version (3.11/3.12)
  "pip_config_file": "./pip.conf",                        // Optional: pip configuration
  "dockerfile_lines": []                                  // Optional: Additional Dockerfile commands
}
```
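Note that the `//` comments above are annotations for this reference only; strict JSON parsers reject comments, so a real `langgraph.json` should omit them. As an illustrative sketch (not part of the CLI itself), you can check the two required keys with Python's `json` module:

```python
import json

# Minimal config with the annotations removed, since strict JSON
# parsers such as json.loads reject comments
raw = """
{
  "dependencies": ["langchain_openai", "./your_package"],
  "graphs": {
    "my_graph": "./your_package/file.py:graph"
  },
  "env": "./.env",
  "python_version": "3.11"
}
"""

config = json.loads(raw)

# "dependencies" and "graphs" are the required keys per the example above
missing = [key for key in ("dependencies", "graphs") if key not in config]
assert not missing, f"langgraph.json is missing required keys: {missing}"

# each graph value is an import path of the form "path/to/file.py:variable"
for name, path in config["graphs"].items():
    module, _, variable = path.rpartition(":")
    assert module and variable, f"graph {name!r} must look like 'path/to/file.py:graph'"
```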

See the [full documentation](https://langchain-ai.github.io/langgraph/cloud/reference/cli/) for detailed configuration options.

## Development

To develop the CLI itself:

1. Clone the repository
2. Navigate to the CLI directory: `cd libs/cli`
3. Install development dependencies: `uv sync`
4. Make your changes to the CLI code
5. Test your changes:
   ```bash
   # Run CLI commands directly
   uv run langgraph --help
   
   # Or use the examples
   cd examples
   uv sync
   uv run langgraph dev  # or other commands
   ```

## License

This project is licensed under the terms specified in the repository's LICENSE file.
- [LangGraph SDK](/python/langgraph-sdk)
  # LangGraph Python SDK

This repository contains the Python SDK for interacting with the LangSmith Deployment REST API.

## Quick Start

To get started with the Python SDK, [install the package](https://pypi.org/project/langgraph-sdk/)

```bash
pip install -U langgraph-sdk
```

You will need a running LangGraph API server. If you're running a server locally with `langgraph-cli`, the SDK automatically points at `http://localhost:8123`; otherwise, specify the server URL when creating a client.

```python
from langgraph_sdk import get_client

# If you're using a remote server, initialize the client with `get_client(url=REMOTE_URL)`
client = get_client()

# List all assistants
assistants = await client.assistants.search()

# We auto-create an assistant for each graph you register in config.
agent = assistants[0]

# Start a new thread
thread = await client.threads.create()

# Start a streaming run
input = {"messages": [{"role": "human", "content": "what's the weather in la"}]}
async for chunk in client.runs.stream(thread['thread_id'], agent['assistant_id'], input=input):
    print(chunk)
```
- [LangGraph Supervisor](/python/langgraph-supervisor)
  # 🤖 LangGraph Multi-Agent Supervisor

> **Note**: We now recommend using the **supervisor pattern directly via tools** rather than this library for most use cases. The tool-calling approach gives you more control over context engineering and is the recommended pattern in the [LangChain multi-agent guide](https://docs.langchain.com/oss/python/langchain/multi-agent). See our [supervisor tutorial](https://docs.langchain.com/oss/python/langchain/supervisor) for a step-by-step guide. We're making this library compatible with LangChain 1.0 to help users upgrade their existing code. If you find this library solves a problem that can't be easily addressed with the manual supervisor pattern, we'd love to hear about your use case!

A Python library for creating hierarchical multi-agent systems using [LangGraph](https://github.com/langchain-ai/langgraph). Hierarchical systems are a type of [multi-agent](https://langchain-ai.github.io/langgraph/concepts/multi_agent) architecture where specialized agents are coordinated by a central **supervisor** agent. The supervisor controls all communication flow and task delegation, making decisions about which agent to invoke based on the current context and task requirements.

## Features

- 🤖 **Create a supervisor agent** to orchestrate multiple specialized agents
- 🛠️ **Tool-based agent handoff mechanism** for communication between agents
- 📝 **Flexible message history management** for conversation control

This library is built on top of [LangGraph](https://github.com/langchain-ai/langgraph), a powerful framework for building agent applications, and comes with out-of-the-box support for [streaming](https://langchain-ai.github.io/langgraph/how-tos/#streaming), [short-term and long-term memory](https://langchain-ai.github.io/langgraph/concepts/memory/), and [human-in-the-loop](https://langchain-ai.github.io/langgraph/concepts/human_in_the_loop/).

## Installation

```bash
pip install langgraph-supervisor
```

> [!NOTE]
> LangGraph Supervisor requires Python >= 3.10

## Quickstart

Here's a simple example of a supervisor managing two specialized agents:

![Supervisor Architecture](https://raw.githubusercontent.com/langchain-ai/langgraph-supervisor-py/731739c6d0407255523e3ff20c9b9044a10d8a2f/./static/img/supervisor.png)

```bash
pip install langgraph-supervisor langchain-openai

export OPENAI_API_KEY=<your_api_key>
```

```python
from langchain_openai import ChatOpenAI

from langgraph_supervisor import create_supervisor
from langgraph.prebuilt import create_react_agent

model = ChatOpenAI(model="gpt-4o")

# Create specialized agents

def add(a: float, b: float) -> float:
    """Add two numbers."""
    return a + b

def multiply(a: float, b: float) -> float:
    """Multiply two numbers."""
    return a * b

def web_search(query: str) -> str:
    """Search the web for information."""
    return (
        "Here are the headcounts for each of the FAANG companies in 2024:\n"
        "1. **Facebook (Meta)**: 67,317 employees.\n"
        "2. **Apple**: 164,000 employees.\n"
        "3. **Amazon**: 1,551,000 employees.\n"
        "4. **Netflix**: 14,000 employees.\n"
        "5. **Google (Alphabet)**: 181,269 employees."
    )

math_agent = create_react_agent(
    model=model,
    tools=[add, multiply],
    name="math_expert",
    prompt="You are a math expert. Always use one tool at a time."
)

research_agent = create_react_agent(
    model=model,
    tools=[web_search],
    name="research_expert",
    prompt="You are a world class researcher with access to web search. Do not do any math."
)

# Create supervisor workflow
workflow = create_supervisor(
    [research_agent, math_agent],
    model=model,
    prompt=(
        "You are a team supervisor managing a research expert and a math expert. "
        "For current events, use research_agent. "
        "For math problems, use math_agent."
    )
)

# Compile and run
app = workflow.compile()
result = app.invoke({
    "messages": [
        {
            "role": "user",
            "content": "what's the combined headcount of the FAANG companies in 2024?"
        }
    ]
})
```

> [!TIP]
> For developing, debugging, and deploying AI agents and LLM applications, see [LangSmith](https://docs.langchain.com/langsmith/home).

## Message History Management

You can control how messages from worker agents are added to the overall conversation history of the multi-agent system:

Include full message history from an agent:

![Full History](https://raw.githubusercontent.com/langchain-ai/langgraph-supervisor-py/731739c6d0407255523e3ff20c9b9044a10d8a2f/./static/img/full_history.png)

```python
workflow = create_supervisor(
    agents=[agent1, agent2],
    output_mode="full_history"
)
```

Include only the final agent response:

![Last Message](https://raw.githubusercontent.com/langchain-ai/langgraph-supervisor-py/731739c6d0407255523e3ff20c9b9044a10d8a2f/./static/img/last_message.png)

```python
workflow = create_supervisor(
    agents=[agent1, agent2],
    output_mode="last_message"
)
```
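Conceptually, the two modes differ only in how much of a worker's message list is merged back into the shared conversation history. Here is an illustrative pure-Python sketch of that idea (not the library's actual implementation; messages are simplified to dicts):

```python
def merge_agent_output(history: list[dict], agent_messages: list[dict], output_mode: str) -> list[dict]:
    """Illustrate how a worker agent's messages join the supervisor's history."""
    if output_mode == "full_history":
        # keep every message the worker generated, including tool calls
        return history + agent_messages
    if output_mode == "last_message":
        # keep only the worker's final response
        return history + agent_messages[-1:]
    raise ValueError(f"unknown output_mode: {output_mode}")

history = [{"role": "user", "content": "what's 2 + 2?"}]
agent_msgs = [
    {"role": "ai", "content": "", "tool_calls": [{"name": "add"}]},
    {"role": "tool", "content": "4"},
    {"role": "ai", "content": "2 + 2 = 4"},
]

assert len(merge_agent_output(history, agent_msgs, "full_history")) == 4
assert len(merge_agent_output(history, agent_msgs, "last_message")) == 2
```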

## Multi-level Hierarchies

You can create multi-level hierarchical systems by creating a supervisor that manages multiple supervisors.

```python
research_team = create_supervisor(
    [research_agent, math_agent],
    model=model,
    supervisor_name="research_supervisor"
).compile(name="research_team")

writing_team = create_supervisor(
    [writing_agent, publishing_agent],
    model=model,
    supervisor_name="writing_supervisor"
).compile(name="writing_team")

top_level_supervisor = create_supervisor(
    [research_team, writing_team],
    model=model,
    supervisor_name="top_level_supervisor"
).compile(name="top_level_supervisor")
```

## Adding Memory

You can add [short-term](https://langchain-ai.github.io/langgraph/how-tos/persistence/) and [long-term](https://langchain-ai.github.io/langgraph/how-tos/cross-thread-persistence/) [memory](https://langchain-ai.github.io/langgraph/concepts/memory/) to your supervisor multi-agent system. Since `create_supervisor()` returns an instance of `StateGraph` that needs to be compiled before use, you can directly pass a [checkpointer](https://langchain-ai.github.io/langgraph/reference/checkpoints/#langgraph.checkpoint.base.BaseCheckpointSaver) or a [store](https://langchain-ai.github.io/langgraph/reference/store/#langgraph.store.base.BaseStore) instance to the `.compile()` method:

```python
from langgraph.checkpoint.memory import InMemorySaver
from langgraph.store.memory import InMemoryStore

checkpointer = InMemorySaver()
store = InMemoryStore()

model = ...
research_agent = ...
math_agent = ...

workflow = create_supervisor(
    [research_agent, math_agent],
    model=model,
    prompt="You are a team supervisor managing a research expert and a math expert.",
)

# Compile with checkpointer/store
app = workflow.compile(
    checkpointer=checkpointer,
    store=store
)
```

## How to customize

### Customizing handoff tools

By default, the supervisor uses handoff tools created with the prebuilt `create_handoff_tool`. You can also create your own custom handoff tools. Here are some ideas for modifying the default implementation:

* change tool name and/or description
* add tool call arguments for the LLM to populate, for example a task description for the next agent
* change what data is passed to the subagent as part of the handoff: by default, `create_handoff_tool` passes the **full** message history (all of the messages generated in the supervisor up to this point), as well as a tool message indicating a successful handoff.

Here is an example of how to pass customized handoff tools to `create_supervisor`:

```python
from langgraph_supervisor import create_handoff_tool
workflow = create_supervisor(
    [research_agent, math_agent],
    tools=[
        create_handoff_tool(agent_name="math_expert", name="assign_to_math_expert", description="Assign task to math expert"),
        create_handoff_tool(agent_name="research_expert", name="assign_to_research_expert", description="Assign task to research expert")
    ],
    model=model,
)
```

You can also control whether the handoff tool invocation messages are added to the state. By default, they are added (`add_handoff_messages=True`), but you can disable this if you want a more concise history:

```python
workflow = create_supervisor(
    [research_agent, math_agent],
    model=model,
    add_handoff_messages=False
)
```

Additionally, you can customize the prefix used for the automatically generated handoff tools:

```python
workflow = create_supervisor(
    [research_agent, math_agent],
    model=model,
    handoff_tool_prefix="delegate_to"
)
# This will create tools named: delegate_to_research_expert, delegate_to_math_expert
```

Here is an example of what a custom handoff tool might look like:

```python
from typing import Annotated

from langchain_core.tools import tool, BaseTool, InjectedToolCallId
from langchain_core.messages import ToolMessage
from langgraph.types import Command
from langgraph.prebuilt import InjectedState
from langgraph_supervisor.handoff import METADATA_KEY_HANDOFF_DESTINATION

def create_custom_handoff_tool(*, agent_name: str, name: str | None, description: str | None) -> BaseTool:

    @tool(name, description=description)
    def handoff_to_agent(
        # you can add additional tool call arguments for the LLM to populate
        # for example, you can ask the LLM to populate a task description for the next agent
        task_description: Annotated[str, "Detailed description of what the next agent should do, including all of the relevant context."],
        # you can inject the state of the agent that is calling the tool
        state: Annotated[dict, InjectedState],
        tool_call_id: Annotated[str, InjectedToolCallId],
    ):
        tool_message = ToolMessage(
            content=f"Successfully transferred to {agent_name}",
            name=name,
            tool_call_id=tool_call_id,
        )
        messages = state["messages"]
        return Command(
            goto=agent_name,
            graph=Command.PARENT,
            # NOTE: this is a state update that will be applied to the supervisor multi-agent graph (i.e., the PARENT graph)
            update={
                "messages": messages + [tool_message],
                "active_agent": agent_name,
                # optionally pass the task description to the next agent
                # NOTE: individual agents would need to have `task_description` in their state schema
                # and would need to implement logic for how to consume it
                "task_description": task_description,
            },
        )

    handoff_to_agent.metadata = {METADATA_KEY_HANDOFF_DESTINATION: agent_name}
    return handoff_to_agent
```

### Message Forwarding

You can equip the supervisor with a tool to directly forward the last message received from a worker agent straight to the final output of the graph using `create_forward_message_tool`. This is useful when the supervisor determines that the worker's response is sufficient and doesn't require further processing or summarization by the supervisor itself. It saves tokens for the supervisor and avoids potential misrepresentation of the worker's response through paraphrasing.

```python
from langgraph_supervisor.handoff import create_forward_message_tool

# Assume research_agent and math_agent are defined as before

forwarding_tool = create_forward_message_tool("supervisor") # The argument is the name to assign to the resulting forwarded message
workflow = create_supervisor(
    [research_agent, math_agent],
    model=model,
    # Pass the forwarding tool along with any other custom or default handoff tools
    tools=[forwarding_tool]
)
```

This creates a tool named `forward_message` that the supervisor can invoke. The tool expects an argument `from_agent` specifying which agent's last message should be forwarded directly to the output.
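An illustrative sketch of the forwarding behavior (messages simplified to dicts; this is the concept, not the library's implementation):

```python
def forward_last_message(messages: list[dict], from_agent: str) -> dict:
    """Return the named agent's last message verbatim, with no paraphrasing."""
    # walk the history backwards to find that agent's most recent message
    for message in reversed(messages):
        if message.get("name") == from_agent:
            return message
    raise ValueError(f"no message from agent {from_agent!r}")

messages = [
    {"role": "user", "content": "what's the combined FAANG headcount?"},
    {"role": "ai", "name": "research_expert", "content": "Combined headcount: roughly 1.98M employees."},
    {"role": "ai", "name": "supervisor", "content": "Handing back to the user."},
]

assert forward_last_message(messages, "research_expert")["content"].startswith("Combined")
```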

## Using the Functional API

Here's a simple example of a supervisor managing two specialized agentic workflows created with the Functional API:

```bash
pip install langgraph-supervisor langchain-openai

export OPENAI_API_KEY=<your_api_key>
```

```python
from langgraph.prebuilt import create_react_agent
from langgraph_supervisor import create_supervisor

from langchain_openai import ChatOpenAI

from langgraph.func import entrypoint, task
from langgraph.graph import add_messages

model = ChatOpenAI(model="gpt-4o")

# Create specialized agents

# Functional API - Agent 1 (Joke Generator)
@task
def generate_joke(messages):
    """First LLM call to generate initial joke"""
    system_message = {
        "role": "system", 
        "content": "Write a short joke"
    }
    msg = model.invoke(
        [system_message] + messages
    )
    return msg

@entrypoint()
def joke_agent(state):
    joke = generate_joke(state['messages']).result()
    messages = add_messages(state["messages"], [joke])
    return {"messages": messages}

joke_agent.name = "joke_agent"

# Graph API - Agent 2 (Research Expert)
def web_search(query: str) -> str:
    """Search the web for information."""
    return (
        "Here are the headcounts for each of the FAANG companies in 2024:\n"
        "1. **Facebook (Meta)**: 67,317 employees.\n"
        "2. **Apple**: 164,000 employees.\n"
        "3. **Amazon**: 1,551,000 employees.\n"
        "4. **Netflix**: 14,000 employees.\n"
        "5. **Google (Alphabet)**: 181,269 employees."
    )

research_agent = create_react_agent(
    model=model,
    tools=[web_search],
    name="research_expert",
    prompt="You are a world class researcher with access to web search. Do not do any math."
)

# Create supervisor workflow
workflow = create_supervisor(
    [research_agent, joke_agent],
    model=model,
    prompt=(
        "You are a team supervisor managing a research expert and a joke expert. "
        "For current events, use research_agent. "
        "For any jokes, use joke_agent."
    )
)

# Compile and run
app = workflow.compile()
result = app.invoke({
    "messages": [
        {
            "role": "user",
            "content": "Share a joke to relax and start vibe coding for my next project idea."
        }
    ]
})

for m in result["messages"]:
    m.pretty_print()
```
- [LangGraph Swarm](/python/langgraph-swarm)
  # 🤖 LangGraph Multi-Agent Swarm

A Python library for creating swarm-style multi-agent systems using [LangGraph](https://github.com/langchain-ai/langgraph). A swarm is a type of [multi-agent](https://langchain-ai.github.io/langgraph/concepts/multi_agent) architecture where agents dynamically hand off control to one another based on their specializations. The system remembers which agent was last active, ensuring that on subsequent interactions, the conversation resumes with that agent.

![Swarm](https://raw.githubusercontent.com/langchain-ai/langgraph-swarm-py/80df5fee466211757d8dcc679892cb1c80949522/./static/img/swarm.png)

## Features

- 🤖 **Multi-agent collaboration** - Enable specialized agents to work together and hand off context to each other
- 🛠️ **Customizable handoff tools** - Built-in tools for communication between agents

This library is built on top of [LangGraph](https://github.com/langchain-ai/langgraph), a powerful framework for building agent applications, and comes with out-of-the-box support for [streaming](https://langchain-ai.github.io/langgraph/how-tos/#streaming), [short-term and long-term memory](https://langchain-ai.github.io/langgraph/concepts/memory/), and [human-in-the-loop](https://langchain-ai.github.io/langgraph/concepts/human_in_the_loop/).

## Installation

```bash
pip install langgraph-swarm
```

## Quickstart

```bash
pip install langgraph-swarm langchain-openai

export OPENAI_API_KEY=<your_api_key>
```

```python
from langchain_openai import ChatOpenAI

from langgraph.checkpoint.memory import InMemorySaver
from langchain.agents import create_agent
from langgraph_swarm import create_handoff_tool, create_swarm

model = ChatOpenAI(model="gpt-4o")

def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b

alice = create_agent(
    model,
    tools=[
        add,
        create_handoff_tool(
            agent_name="Bob",
            description="Transfer to Bob",
        ),
    ],
    system_prompt="You are Alice, an addition expert.",
    name="Alice",
)

bob = create_agent(
    model,
    tools=[
        create_handoff_tool(
            agent_name="Alice",
            description="Transfer to Alice, she can help with math",
        ),
    ],
    system_prompt="You are Bob, you speak like a pirate.",
    name="Bob",
)

checkpointer = InMemorySaver()
workflow = create_swarm(
    [alice, bob],
    default_active_agent="Alice"
)
app = workflow.compile(checkpointer=checkpointer)

config = {"configurable": {"thread_id": "1"}}
turn_1 = app.invoke(
    {"messages": [{"role": "user", "content": "i'd like to speak to Bob"}]},
    config,
)
print(turn_1)
turn_2 = app.invoke(
    {"messages": [{"role": "user", "content": "what's 5 + 7?"}]},
    config,
)
print(turn_2)
```

> [!TIP]
> For developing, debugging, and deploying AI agents and LLM applications, see [LangSmith](https://docs.langchain.com/langsmith/home).

## Memory

You can add [short-term](https://langchain-ai.github.io/langgraph/how-tos/persistence/) and [long-term](https://langchain-ai.github.io/langgraph/how-tos/cross-thread-persistence/) [memory](https://langchain-ai.github.io/langgraph/concepts/memory/) to your swarm multi-agent system. Since `create_swarm()` returns an instance of `StateGraph` that needs to be compiled before use, you can directly pass a [checkpointer](https://langchain-ai.github.io/langgraph/reference/checkpoints/#langgraph.checkpoint.base.BaseCheckpointSaver) or a [store](https://langchain-ai.github.io/langgraph/reference/store/#langgraph.store.base.BaseStore) instance to the `.compile()` method:

```python
from langgraph.checkpoint.memory import InMemorySaver
from langgraph.store.memory import InMemoryStore

# short-term memory
checkpointer = InMemorySaver()
# long-term memory
store = InMemoryStore()

model = ...
alice = ...
bob = ...

workflow = create_swarm(
    [alice, bob],
    default_active_agent="Alice"
)

# Compile with checkpointer/store
app = workflow.compile(
    checkpointer=checkpointer,
    store=store
)
```

> [!IMPORTANT]
> Adding [short-term memory](https://langchain-ai.github.io/langgraph/concepts/persistence/) is crucial for maintaining conversation state across multiple interactions. Without it, the swarm would "forget" which agent was last active and lose the conversation history. Make sure to always compile the swarm with a checkpointer if you plan to use it in multi-turn conversations; e.g., `workflow.compile(checkpointer=checkpointer)`.

## How to customize

You can customize the multi-agent swarm by changing either the [handoff tools](#customizing-handoff-tools) implementation or the [agent implementation](#customizing-agent-implementation).

### Customizing handoff tools

By default, the agents in the swarm are assumed to use handoff tools created with the prebuilt `create_handoff_tool`. You can also create your own custom handoff tools. Here are some ideas for modifying the default implementation:

- change tool name and/or description
- add tool call arguments for the LLM to populate, for example a task description for the next agent
- change what data is passed to the next agent as part of the handoff: by default, `create_handoff_tool` passes the **full** message history (all of the messages generated in the swarm up to this point), as well as a tool message indicating a successful handoff.

Here is an example of what a custom handoff tool might look like:

```python
from typing import Annotated

from langchain.tools import tool, BaseTool, InjectedToolCallId
from langchain.messages import ToolMessage
from langgraph.types import Command
from langgraph.prebuilt import InjectedState

def create_custom_handoff_tool(*, agent_name: str, name: str | None, description: str | None) -> BaseTool:

    @tool(name, description=description)
    def handoff_to_agent(
        # you can add additional tool call arguments for the LLM to populate
        # for example, you can ask the LLM to populate a task description for the next agent
        task_description: Annotated[str, "Detailed description of what the next agent should do, including all of the relevant context."],
        # you can inject the state of the agent that is calling the tool
        state: Annotated[dict, InjectedState],
        tool_call_id: Annotated[str, InjectedToolCallId],
    ):
        tool_message = ToolMessage(
            content=f"Successfully transferred to {agent_name}",
            name=name,
            tool_call_id=tool_call_id,
        )
        # you can use a different messages state key here, if your agent uses a different schema
        # e.g., "alice_messages" instead of "messages"
        messages = state["messages"]
        return Command(
            goto=agent_name,
            graph=Command.PARENT,
            # NOTE: this is a state update that will be applied to the swarm multi-agent graph (i.e., the PARENT graph)
            update={
                "messages": messages + [tool_message],
                "active_agent": agent_name,
                # optionally pass the task description to the next agent
                "task_description": task_description,
            },
        )

    return handoff_to_agent
```

> [!IMPORTANT]
> If you are implementing custom handoff tools that return `Command`, you need to ensure that:  
>  (1) your agent has a tool-calling node that can handle tools returning `Command` (like LangGraph's prebuilt [`ToolNode`](https://langchain-ai.github.io/langgraph/reference/prebuilt/#langgraph.prebuilt.tool_node.ToolNode))  
>  (2) both the swarm graph and the next agent graph have the [state schema](https://langchain-ai.github.io/langgraph/concepts/low_level#schema) containing the keys you want to update in `Command.update`

### Customizing agent implementation

By default, individual agents are expected to communicate over a single `messages` key that is shared by all agents and the overall multi-agent swarm graph. This means that messages from **all** of the agents will be combined into a single, shared list of messages. This might not be desirable if you don't want to expose an agent's internal history of messages. To change this, you can customize the agent by taking the following steps:

1.  use a custom [state schema](https://langchain-ai.github.io/langgraph/concepts/low_level#schema) with a different key for messages, for example `alice_messages`
1.  write a wrapper that converts the parent graph state to the child agent state and back (see this [how-to](https://langchain-ai.github.io/langgraph/how-tos/subgraph-transform-state/) guide)

```python
from typing_extensions import TypedDict, Annotated

from langchain.messages import AnyMessage
from langgraph.graph import StateGraph, add_messages
from langgraph_swarm import SwarmState

class AliceState(TypedDict):
    alice_messages: Annotated[list[AnyMessage], add_messages]

# see this guide to learn how you can implement a custom tool-calling agent
# https://langchain-ai.github.io/langgraph/how-tos/react-agent-from-scratch/
alice = (
    StateGraph(AliceState)
    .add_node("model", ...)
    .add_node("tools", ...)
    .add_edge(...)
    ...
    .compile()
)

# wrapper calling the agent
def call_alice(state: SwarmState):
    # you can put any input transformation from parent state -> agent state
    # for example, you can invoke "alice" with "task_description" populated by the LLM
    response = alice.invoke({"alice_messages": state["messages"]})
    # you can put any output transformation from agent state -> parent state
    return {"messages": response["alice_messages"]}

def call_bob(state: SwarmState):
    ...
```

Then, you can create the swarm manually in the following way:

```python
from langgraph_swarm import add_active_agent_router

workflow = (
    StateGraph(SwarmState)
    .add_node("Alice", call_alice, destinations=("Bob",))
    .add_node("Bob", call_bob, destinations=("Alice",))
)
# this is the router that enables us to keep track of the last active agent
workflow = add_active_agent_router(
    builder=workflow,
    route_to=["Alice", "Bob"],
    default_active_agent="Alice",
)

# compile the workflow
app = workflow.compile()
```
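The router added by `add_active_agent_router` conceptually just dispatches to whichever agent was last active, falling back to the default. An illustrative sketch of that routing decision (not the library's code; state is simplified to a dict):

```python
def route_to_active_agent(state: dict, route_to: list[str], default_active_agent: str) -> str:
    """Pick the next node to run based on the remembered active agent."""
    # the "active_agent" key is what handoff tools update on each transfer
    active = state.get("active_agent") or default_active_agent
    if active not in route_to:
        raise ValueError(f"unknown agent: {active!r}")
    return active

# no agent active yet: fall back to the default
assert route_to_active_agent({}, ["Alice", "Bob"], "Alice") == "Alice"
# after a handoff to Bob, the conversation resumes with Bob
assert route_to_active_agent({"active_agent": "Bob"}, ["Alice", "Bob"], "Alice") == "Bob"
```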
- [Deep Agents](/python/deepagents)
  # 🧠🤖 Deep Agents

[![PyPI - Version](https://img.shields.io/pypi/v/deepagents?label=%20)](https://pypi.org/project/deepagents/#history)
[![PyPI - License](https://img.shields.io/pypi/l/deepagents)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pepy/dt/deepagents)](https://pypistats.org/packages/deepagents)
[![Twitter](https://img.shields.io/twitter/url/https/twitter.com/langchain.svg?style=social&label=Follow%20%40LangChain)](https://x.com/langchain)

Looking for the JS/TS version? Check out [Deep Agents.js](https://github.com/langchain-ai/deepagentsjs).

To help you ship LangChain apps to production faster, check out [LangSmith](https://smith.langchain.com).
LangSmith is a unified developer platform for building, testing, and monitoring LLM applications.

## Quick Install

```bash
pip install deepagents
# or
uv add deepagents
```

## 🤔 What is this?

Using an LLM to call tools in a loop is the simplest form of an agent. This architecture, however, can yield agents that are "shallow" and fail to plan and act over longer, more complex tasks.

Applications like "Deep Research", "Manus", and "Claude Code" have gotten around this limitation by implementing a combination of four things: a **planning tool**, **sub agents**, access to a **file system**, and a **detailed prompt**.

`deepagents` is a Python package that implements these in a general purpose way so that you can easily create a Deep Agent for your application. For a full overview and quickstart of Deep Agents, the best resource is our [docs](https://docs.langchain.com/oss/python/deepagents/overview).

**Acknowledgements: This project was primarily inspired by Claude Code, and initially was largely an attempt to see what made Claude Code general purpose, and make it even more so.**

## 📖 Resources

- **[Documentation](https://docs.langchain.com/oss/python/deepagents)** — Full documentation
- **[API Reference](https://reference.langchain.com/python/deepagents/)** — Full SDK reference documentation
- **[Chat LangChain](https://chat.langchain.com)** — Chat interactively with the docs

## 📕 Releases & Versioning

See our [Releases](https://docs.langchain.com/oss/python/release-policy) and [Versioning](https://docs.langchain.com/oss/python/versioning) policies.

## 💁 Contributing

As an open-source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation.

For detailed information on how to contribute, see the [Contributing Guide](https://docs.langchain.com/oss/python/contributing/overview).
- [Deep Agents CLI](/python/deepagents-cli)
  # 🧠🤖 Deep Agents CLI

[![PyPI - Version](https://img.shields.io/pypi/v/deepagents-cli?label=%20)](https://pypi.org/project/deepagents-cli/#history)
[![PyPI - License](https://img.shields.io/pypi/l/deepagents-cli)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pepy/dt/deepagents-cli)](https://pypistats.org/packages/deepagents-cli)
[![Twitter](https://img.shields.io/twitter/url/https/twitter.com/langchain.svg?style=social&label=Follow%20%40LangChain)](https://x.com/langchain)

<p align="center">
  <img src="https://raw.githubusercontent.com/langchain-ai/deepagents/main/libs/cli/images/cli.png" alt="Deep Agents CLI" width="600"/>
</p>

## Quick Install

```bash
curl -LsSf https://raw.githubusercontent.com/langchain-ai/deepagents/main/scripts/install.sh | bash
```

```bash
# With model provider extras (OpenAI is included by default)
DEEPAGENTS_EXTRAS="anthropic,groq" curl -LsSf https://raw.githubusercontent.com/langchain-ai/deepagents/main/scripts/install.sh | bash
```

Or install directly with `uv`:

```bash
# Install with chosen model providers (OpenAI is included by default)
uv tool install 'deepagents-cli[anthropic,groq]'
```

Run the CLI:

```bash
deepagents
```

## 🤔 What is this?

Using an LLM to call tools in a loop is the simplest form of an agent. This architecture, however, can yield agents that are "shallow" and fail to plan and act over longer, more complex tasks.

Applications like "Deep Research", "Manus", and "Claude Code" have gotten around this limitation by implementing a combination of four things: a **planning tool**, **sub agents**, access to a **file system**, and a **detailed prompt**.

`deepagents` is a Python package that implements these in a general purpose way so that you can easily create a Deep Agent for your application. For a full overview and quickstart of Deep Agents, the best resource is our [docs](https://docs.langchain.com/oss/python/deepagents/overview).

**Acknowledgements: This project was primarily inspired by Claude Code, and initially was largely an attempt to see what made Claude Code general purpose, and make it even more so.**

## 📖 Resources

- **[CLI Documentation](https://docs.langchain.com/oss/python/deepagents/cli/overview)** — Full documentation
- **[CLI Source](https://github.com/langchain-ai/deepagents/tree/main/libs/cli)** — Full source code
- **[Deep Agents SDK](https://github.com/langchain-ai/deepagents)** — The underlying agent harness
- **[Chat LangChain](https://chat.langchain.com)** — Chat interactively with the docs

## 📕 Releases & Versioning

See our [Releases](https://docs.langchain.com/oss/python/release-policy) and [Versioning](https://docs.langchain.com/oss/python/versioning) policies.

## 💁 Contributing

As an open-source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation.

For detailed information on how to contribute, see the [Contributing Guide](https://docs.langchain.com/oss/python/contributing/overview).
- [Amazon Nova](/python/langchain-amazon-nova)
  # `langchain-amazon-nova`

[![PyPI - Version](https://img.shields.io/pypi/v/langchain-amazon-nova?label=%20)](https://pypi.org/project/langchain-amazon-nova/#history)
[![PyPI - License](https://img.shields.io/pypi/l/langchain-amazon-nova)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pepy/dt/langchain-amazon-nova)](https://pypistats.org/packages/langchain-amazon-nova)

!!! note "Reference docs"
    This page contains **reference documentation** for Amazon Nova. See [the docs](https://docs.langchain.com/oss/python/integrations/providers/amazon_nova) for conceptual guides, tutorials, and examples on using Amazon Nova modules.
- [Anthropic](/python/langchain-anthropic)
  # `langchain-anthropic`

[![PyPI - Version](https://img.shields.io/pypi/v/langchain-anthropic?label=%20)](https://pypi.org/project/langchain-anthropic/#history)
[![PyPI - License](https://img.shields.io/pypi/l/langchain-anthropic)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pepy/dt/langchain-anthropic)](https://pypistats.org/packages/langchain-anthropic)

## Modules

> **Usage documentation**
> Refer to [the docs](https://docs.langchain.com/oss/python/integrations/providers/anthropic) for a high-level guide on how to use each module. These reference pages contain auto-generated API documentation for each module, focusing on the "what" rather than the "how" or "why" (i.e. no end-to-end tutorials or conceptual overviews).

<div class="grid-cards">
  <a href="/python/integrations/langchain_anthropic/chat-anthropic/" class="grid-card">
    <div class="grid-card-title"><code>ChatAnthropic</code></div>
    <p>Anthropic chat models.</p>
  </a>
  <a href="/python/integrations/langchain_anthropic/anthropic-llm/" class="grid-card">
    <div class="grid-card-title"><code>AnthropicLLM</code></div>
    <p>(Legacy) Anthropic text completion models.</p>
  </a>
  <a href="/python/integrations/langchain_anthropic/middleware/" class="grid-card">
    <div class="grid-card-title">Middleware</div>
    <p>Anthropic-specific middleware for Claude models.</p>
  </a>
</div>
- [AstraDB](/python/langchain-astradb)
  [![PyPI - Version](https://img.shields.io/pypi/v/langchain-astradb?label=%20)](https://pypi.org/project/langchain-astradb/#history)
[![PyPI - License](https://img.shields.io/pypi/l/langchain-astradb)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pepy/dt/langchain-astradb)](https://pypistats.org/packages/langchain-astradb)

!!! note "Reference docs"
    This page contains **reference documentation** for AstraDB. See [the docs](https://docs.langchain.com/oss/python/integrations/providers/astradb) for conceptual guides, tutorials, and examples on using AstraDB.
- [AWS](/python/langchain-aws)
  [![PyPI - Version](https://img.shields.io/pypi/v/langchain-aws?label=%20)](https://pypi.org/project/langchain-aws/#history)
[![PyPI - License](https://img.shields.io/pypi/l/langchain-aws)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pepy/dt/langchain-aws)](https://pypistats.org/packages/langchain-aws)

!!! note
    This package ref has not yet been fully migrated to v1.

!!! note "Reference docs"
    This page contains **reference documentation** for AWS. See [the docs](https://docs.langchain.com/oss/python/integrations/providers/aws) for conceptual guides, tutorials, and examples on using AWS modules.
- [Azure (Microsoft)](/python/langchain-azure-dynamic-sessions)
  [![PyPI - Version](https://img.shields.io/pypi/v/langchain-azure-dynamic-sessions?label=%20)](https://pypi.org/project/langchain-azure-dynamic-sessions/#history)
[![PyPI - License](https://img.shields.io/pypi/l/langchain-azure-dynamic-sessions)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pepy/dt/langchain-azure-dynamic-sessions)](https://pypistats.org/packages/langchain-azure-dynamic-sessions)

!!! note
    This package ref has not yet been migrated to v1. It will be updated pending the release of `langchain-azure-dynamic-sessions` v1.0.

<!-- !!! note "Reference docs"
    This page contains **reference documentation** for `Azure AI`. See [the docs](https://docs.langchain.com/oss/python/integrations/providers/microsoft) for conceptual guides, tutorials, and examples on using `Azure AI`. -->

<!-- ::: langchain_azure_dynamic_sessions -->
- [Cerebras](/python/langchain-cerebras)
  [![PyPI - Version](https://img.shields.io/pypi/v/langchain-cerebras?label=%20)](https://pypi.org/project/langchain-cerebras/#history)
[![PyPI - License](https://img.shields.io/pypi/l/langchain-cerebras)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pepy/dt/langchain-cerebras)](https://pypistats.org/packages/langchain-cerebras)
- [Chroma](/python/langchain-chroma)
  [![PyPI - Version](https://img.shields.io/pypi/v/langchain-chroma?label=%20)](https://pypi.org/project/langchain-chroma/#history)
[![PyPI - License](https://img.shields.io/pypi/l/langchain-chroma)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pepy/dt/langchain-chroma)](https://pypistats.org/packages/langchain-chroma)

!!! note "Reference docs"
    This page contains **reference documentation** for Chroma. See [the docs](https://docs.langchain.com/oss/python/integrations/providers/chroma) for conceptual guides, tutorials, and examples on using Chroma modules.
- [Cohere](/python/langchain-cohere)
  [![PyPI - Version](https://img.shields.io/pypi/v/langchain-cohere?label=%20)](https://pypi.org/project/langchain-cohere/#history)
[![PyPI - License](https://img.shields.io/pypi/l/langchain-cohere)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pepy/dt/langchain-cohere)](https://pypistats.org/packages/langchain-cohere)

!!! note "Reference docs"
    This page contains **reference documentation** for Cohere. See [the docs](https://docs.langchain.com/oss/python/integrations/providers/cohere) for conceptual guides, tutorials, and examples on using Cohere modules.
- [Community](/python/langchain-community)
  [![PyPI - Version](https://img.shields.io/pypi/v/langchain-community?label=%20)](https://pypi.org/project/langchain-community/#history)
[![PyPI - License](https://img.shields.io/pypi/l/langchain-community)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pepy/dt/langchain-community)](https://pypistats.org/packages/langchain-community)

Reference documentation for the [`langchain-community`](https://pypi.org/project/langchain-community/) package.
- [Db2](/python/langchain-db2)
  [![PyPI - Version](https://img.shields.io/pypi/v/langchain-db2?label=%20)](https://pypi.org/project/langchain-db2/#history)
[![PyPI - License](https://img.shields.io/pypi/l/langchain-db2)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pepy/dt/langchain-db2)](https://pypistats.org/packages/langchain-db2)

!!! note "Reference docs"
    This page contains **reference documentation** for IBM Db2. See [the docs](https://docs.langchain.com/oss/python/integrations/providers/ibm#db2) for conceptual guides, tutorials, and examples on using Db2 modules.
- [DeepSeek](/python/langchain-deepseek)
  [![PyPI - Version](https://img.shields.io/pypi/v/langchain-deepseek?label=%20)](https://pypi.org/project/langchain-deepseek/#history)
[![PyPI - License](https://img.shields.io/pypi/l/langchain-deepseek)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pepy/dt/langchain-deepseek)](https://pypistats.org/packages/langchain-deepseek)

!!! note "Reference docs"
    This page contains **reference documentation** for DeepSeek. See [the docs](https://docs.langchain.com/oss/python/integrations/chat/deepseek) for conceptual guides, tutorials, and examples on using `ChatDeepSeek`.
- [Elasticsearch](/python/langchain-elasticsearch)
  [![PyPI - Version](https://img.shields.io/pypi/v/langchain-elasticsearch?label=%20)](https://pypi.org/project/langchain-elasticsearch/#history)
[![PyPI - License](https://img.shields.io/pypi/l/langchain-elasticsearch)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pepy/dt/langchain-elasticsearch)](https://pypistats.org/packages/langchain-elasticsearch)
- [Exa](/python/langchain-exa)
  [![PyPI - Version](https://img.shields.io/pypi/v/langchain-exa?label=%20)](https://pypi.org/project/langchain-exa/#history)
[![PyPI - License](https://img.shields.io/pypi/l/langchain-exa)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pepy/dt/langchain-exa)](https://pypistats.org/packages/langchain-exa)

!!! note "Reference docs"
    This page contains **reference documentation** for Exa. See [the docs](https://docs.langchain.com/oss/python/integrations/providers/exa_search) for conceptual guides, tutorials, and examples on using Exa modules.
- [Fireworks](/python/langchain-fireworks)
  [![PyPI - Version](https://img.shields.io/pypi/v/langchain-fireworks?label=%20)](https://pypi.org/project/langchain-fireworks/#history)
[![PyPI - License](https://img.shields.io/pypi/l/langchain-fireworks)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pepy/dt/langchain-fireworks)](https://pypistats.org/packages/langchain-fireworks)

!!! note "Reference docs"
    This page contains **reference documentation** for Fireworks. See [the docs](https://docs.langchain.com/oss/python/integrations/providers/fireworks) for conceptual guides, tutorials, and examples on using Fireworks modules.
- [Google (Community)](/python/langchain-google-community)
  [![PyPI - Version](https://img.shields.io/pypi/v/langchain-google-community?label=%20)](https://pypi.org/project/langchain-google-community/#history)
[![PyPI - License](https://img.shields.io/pypi/l/langchain-google-community)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pepy/dt/langchain-google-community)](https://pypistats.org/packages/langchain-google-community)

!!! note "Reference docs"
    This page contains **reference documentation** for Google Community. See [the docs](https://docs.langchain.com/oss/python/integrations/providers/google) for conceptual guides, tutorials, and examples on using Google Community modules.
- [Google GenAI (Gemini)](/python/langchain-google-genai)
  # `langchain-google-genai`

[![PyPI - Version](https://img.shields.io/pypi/v/langchain-google-genai?label=%20)](https://pypi.org/project/langchain-google-genai/#history)
[![PyPI - License](https://img.shields.io/pypi/l/langchain-google-genai)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pepy/dt/langchain-google-genai)](https://pypistats.org/packages/langchain-google-genai)

LangChain integration for Google's Generative AI models, providing access to Gemini models via both the **Gemini Developer API** and **Vertex AI**.

!!! note "Vertex AI consolidation"

    As of `langchain-google-genai` 4.0.0, this package uses the consolidated [`google-genai`](https://googleapis.github.io/python-genai/) SDK instead of the legacy [`google-ai-generativelanguage`](https://googleapis.dev/python/generativelanguage/latest/) SDK.

    This migration brings support for Gemini models both via the Gemini Developer API and Gemini API in Vertex AI, superseding certain classes in `langchain-google-vertexai`, such as `ChatVertexAI`. Refer to [the provider docs](https://docs.langchain.com/oss/python/integrations/providers/google) and [release notes](https://github.com/langchain-ai/langchain-google/discussions/1422) for more information.

## Modules

> **Usage documentation**
> Refer to [the docs](https://docs.langchain.com/oss/python/integrations/providers/google) for a high-level guide on how to use each module. These reference pages contain auto-generated API documentation for each module, focusing on the "what" rather than the "how" or "why" (i.e. no end-to-end tutorials or conceptual overviews).

<div class="grid-cards">
  <a href="/python/integrations/langchain_google_genai/chat-google-generative-ai/" class="grid-card">
    <div class="grid-card-title"><code>ChatGoogleGenerativeAI</code></div>
    <p>Gemini chat models (primary interface).</p>
  </a>
  <a href="/python/integrations/langchain_google_genai/google-generative-ai/" class="grid-card">
    <div class="grid-card-title"><code>GoogleGenerativeAI</code></div>
    <p>(Legacy) Google text completion abstraction.</p>
  </a>
  <a href="/python/integrations/langchain_google_genai/google-generative-ai-embeddings/" class="grid-card">
    <div class="grid-card-title"><code>GoogleGenerativeAIEmbeddings</code></div>
    <p>Gemini embedding models.</p>
  </a>
  <a href="/python/integrations/langchain_google_genai/create-context-cache/" class="grid-card">
    <div class="grid-card-title"><code>create_context_cache</code></div>
    <p>Utility to create a context cache for reusing large content across requests.</p>
  </a>
  <a href="/python/integrations/langchain_google_genai/harm-category/" class="grid-card">
    <div class="grid-card-title"><code>HarmCategory</code></div>
    <p>Enum for safety setting harm categories.</p>
  </a>
  <a href="/python/integrations/langchain_google_genai/harm-block-threshold/" class="grid-card">
    <div class="grid-card-title"><code>HarmBlockThreshold</code></div>
    <p>Enum for safety setting blocking thresholds.</p>
  </a>
  <a href="/python/integrations/langchain_google_genai/modality/" class="grid-card">
    <div class="grid-card-title"><code>Modality</code></div>
    <p>Enum for input/output modality configuration.</p>
  </a>
  <a href="/python/integrations/langchain_google_genai/media-resolution/" class="grid-card">
    <div class="grid-card-title"><code>MediaResolution</code></div>
    <p>Enum for media resolution settings.</p>
  </a>
  <a href="/python/integrations/langchain_google_genai/computer-use/" class="grid-card">
    <div class="grid-card-title"><code>ComputerUse</code></div>
    <p>Enum for computer use capabilities.</p>
  </a>
  <a href="/python/integrations/langchain_google_genai/environment/" class="grid-card">
    <div class="grid-card-title"><code>Environment</code></div>
    <p>Enum for environment configuration.</p>
  </a>
</div>
- [Google Vertex AI](/python/langchain-google-vertexai)
  [![PyPI - Version](https://img.shields.io/pypi/v/langchain-google-vertexai?label=%20)](https://pypi.org/project/langchain-google-vertexai/#history)
[![PyPI - License](https://img.shields.io/pypi/l/langchain-google-vertexai)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pepy/dt/langchain-google-vertexai)](https://pypistats.org/packages/langchain-google-vertexai)

LangChain integration for Google's Vertex AI Platform.

!!! note "Vertex AI consolidation"

    As of `langchain-google-vertexai` 3.2.0, certain classes are deprecated in favor of equivalents in `langchain-google-genai` 4.0.0, which uses the consolidated [`google-genai`](https://googleapis.github.io/python-genai/) SDK.

    Refer to [the provider docs](https://docs.langchain.com/oss/python/integrations/providers/google) and [release notes](https://github.com/langchain-ai/langchain-google/discussions/1422) for more information.

## Modules

!!! note "Usage documentation"
    Refer to [the docs](https://docs.langchain.com/oss/python/integrations/providers/google) for a high-level guide on how to use each module. These reference pages contain auto-generated API documentation for each module, focusing on the "what" rather than the "how" or "why" (i.e. no end-to-end tutorials or conceptual overviews).

<div class="grid-cards">
  <a href="./ChatVertexAI" class="grid-card">
    <div class="grid-card-title"><code>ChatVertexAI</code></div>
    <p><strong>Deprecated:</strong> Use <code>ChatGoogleGenerativeAI</code> from <code>langchain-google-genai</code> instead.</p>
  </a>
  <a href="./VertexAI" class="grid-card">
    <div class="grid-card-title"><code>VertexAI</code></div>
    <p><strong>Deprecated:</strong> Use <code>GoogleGenerativeAI</code> from <code>langchain-google-genai</code> instead.</p>
  </a>
  <a href="./VertexAIEmbeddings" class="grid-card">
    <div class="grid-card-title"><code>VertexAIEmbeddings</code></div>
    <p><strong>Deprecated:</strong> Use <code>GoogleGenerativeAIEmbeddings</code> from <code>langchain-google-genai</code> instead.</p>
  </a>
</div>
- [Groq](/python/langchain-groq)
  [![PyPI - Version](https://img.shields.io/pypi/v/langchain-groq?label=%20)](https://pypi.org/project/langchain-groq/#history)
[![PyPI - License](https://img.shields.io/pypi/l/langchain-groq)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pepy/dt/langchain-groq)](https://pypistats.org/packages/langchain-groq)

!!! note "Reference docs"
    This page contains **reference documentation** for Groq. See [the docs](https://docs.langchain.com/oss/python/integrations/providers/groq) for conceptual guides, tutorials, and examples on using Groq modules.
- [HuggingFace](/python/langchain-huggingface)
  [![PyPI - Version](https://img.shields.io/pypi/v/langchain-huggingface?label=%20)](https://pypi.org/project/langchain-huggingface/#history)
[![PyPI - License](https://img.shields.io/pypi/l/langchain-huggingface)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pepy/dt/langchain-huggingface)](https://pypistats.org/packages/langchain-huggingface)

!!! note "Reference docs"
    This page contains **reference documentation** for Hugging Face. See [the docs](https://docs.langchain.com/oss/python/integrations/providers/huggingface) for conceptual guides, tutorials, and examples on using Hugging Face modules.
- [IBM](/python/langchain-ibm)
  # `langchain-ibm`

[![PyPI - Version](https://img.shields.io/pypi/v/langchain-ibm?label=%20)](https://pypi.org/project/langchain-ibm/#history)
[![PyPI - License](https://img.shields.io/pypi/l/langchain-ibm)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pepy/dt/langchain-ibm)](https://pypistats.org/packages/langchain-ibm)

## Modules

> **Usage documentation**
> Refer to [the docs](https://docs.langchain.com/oss/python/integrations/providers/ibm) for a high-level guide on how to use each module. These reference pages contain auto-generated API documentation for each module, focusing on the "what" rather than the "how" or "why" (i.e. no end-to-end tutorials or conceptual overviews).

<div class="grid-cards">
  <a href="/python/integrations/langchain_ibm/chat-watsonx/" class="grid-card">
    <div class="grid-card-title"><code>ChatWatsonx</code></div>
    <p>IBM watsonx.ai chat models.</p>
  </a>
  <a href="/python/integrations/langchain_ibm/watsonx-llm/" class="grid-card">
    <div class="grid-card-title"><code>WatsonxLLM</code></div>
    <p>(Legacy) IBM watsonx.ai text completion models.</p>
  </a>
  <a href="/python/integrations/langchain_ibm/watsonx-embeddings/" class="grid-card">
    <div class="grid-card-title"><code>WatsonxEmbeddings</code></div>
    <p>IBM watsonx.ai embedding models.</p>
  </a>
  <a href="/python/integrations/langchain_ibm/watsonx-rerank/" class="grid-card">
    <div class="grid-card-title"><code>WatsonxRerank</code></div>
    <p>IBM watsonx.ai document reranker.</p>
  </a>
  <a href="/python/integrations/langchain_ibm/watsonx-toolkit/" class="grid-card">
    <div class="grid-card-title"><code>WatsonxToolkit</code></div>
    <p>IBM watsonx.ai toolkit.</p>
  </a>
  <a href="/python/integrations/langchain_ibm/watsonx-sql-database-toolkit/" class="grid-card">
    <div class="grid-card-title"><code>WatsonxSQLDatabaseToolkit</code></div>
    <p>IBM watsonx.ai SQL database toolkit.</p>
  </a>
</div>
- [LiteLLM](/python/langchain-litellm)
  [![PyPI - Version](https://img.shields.io/pypi/v/langchain-litellm?label=%20)](https://pypi.org/project/langchain-litellm/#history)
[![PyPI - License](https://img.shields.io/pypi/l/langchain-litellm)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pepy/dt/langchain-litellm)](https://pypistats.org/packages/langchain-litellm)

## Modules

<div class="grid-cards">
  <a href="./chat-models" class="grid-card">
    <div class="grid-card-title"><code>ChatLiteLLM</code></div>
    <p>Chat model integration for LiteLLM.</p>
  </a>
  <a href="./embeddings" class="grid-card">
    <div class="grid-card-title"><code>LiteLLMEmbeddings</code></div>
    <p>Embedding model integration for LiteLLM.</p>
  </a>
  <a href="./document-loaders" class="grid-card">
    <div class="grid-card-title"><code>LiteLLMOCRLoader</code></div>
    <p>OCR-based document loader powered by LiteLLM.</p>
  </a>
</div>
- [Milvus](/python/langchain-milvus)
  [![PyPI - Version](https://img.shields.io/pypi/v/langchain-milvus?label=%20)](https://pypi.org/project/langchain-milvus/#history)
[![PyPI - License](https://img.shields.io/pypi/l/langchain-milvus)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pepy/dt/langchain-milvus)](https://pypistats.org/packages/langchain-milvus)

!!! note "Reference docs"
    This page contains **reference documentation** for Milvus. See [the docs](https://docs.langchain.com/oss/python/integrations/providers/milvus) for conceptual guides, tutorials, and examples on using Milvus modules.
- [Mistral AI](/python/langchain-mistralai)
  [![PyPI - Version](https://img.shields.io/pypi/v/langchain-mistralai?label=%20)](https://pypi.org/project/langchain-mistralai/#history)
[![PyPI - License](https://img.shields.io/pypi/l/langchain-mistralai)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pepy/dt/langchain-mistralai)](https://pypistats.org/packages/langchain-mistralai)

!!! note "Reference docs"
    This page contains **reference documentation** for Mistral AI. See [the docs](https://docs.langchain.com/oss/python/integrations/providers/mistralai) for conceptual guides, tutorials, and examples on using Mistral AI modules.
- [MongoDB](/python/langchain-mongodb)
  Documentation for `langchain-mongodb` is hosted separately by the MongoDB team. Visit their [official API reference site](https://langchain-mongodb.readthedocs.io/en/latest/index.html) to get started with this integration.
- [Neo4j](/python/langchain-neo4j)
  [![PyPI - Version](https://img.shields.io/pypi/v/langchain-neo4j?label=%20)](https://pypi.org/project/langchain-neo4j/#history)
[![PyPI - License](https://img.shields.io/pypi/l/langchain-neo4j)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pepy/dt/langchain-neo4j)](https://pypistats.org/packages/langchain-neo4j)

!!! note "Reference docs"
    This page contains **reference documentation** for Neo4j. See [the docs](https://docs.langchain.com/oss/python/integrations/providers/neo4j) for conceptual guides, tutorials, and examples on using Neo4j modules.
- [Nomic](/python/langchain-nomic)
  [![PyPI - Version](https://img.shields.io/pypi/v/langchain-nomic?label=%20)](https://pypi.org/project/langchain-nomic/#history)
[![PyPI - License](https://img.shields.io/pypi/l/langchain-nomic)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pepy/dt/langchain-nomic)](https://pypistats.org/packages/langchain-nomic)

!!! note "Reference docs"
    This page contains **reference documentation** for Nomic. See [the docs](https://docs.langchain.com/oss/python/integrations/providers/nomic) for conceptual guides, tutorials, and examples on using Nomic modules.
- [Nvidia AI Endpoints](/python/langchain-nvidia-ai-endpoints)
  [![PyPI - Version](https://img.shields.io/pypi/v/langchain-nvidia-ai-endpoints?label=%20)](https://pypi.org/project/langchain-nvidia-ai-endpoints/#history)
[![PyPI - License](https://img.shields.io/pypi/l/langchain-nvidia-ai-endpoints)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pepy/dt/langchain-nvidia-ai-endpoints)](https://pypistats.org/packages/langchain-nvidia-ai-endpoints)
- [Ollama](/python/langchain-ollama)
  [![PyPI - Version](https://img.shields.io/pypi/v/langchain-ollama?label=%20)](https://pypi.org/project/langchain-ollama/#history)
[![PyPI - License](https://img.shields.io/pypi/l/langchain-ollama)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pepy/dt/langchain-ollama)](https://pypistats.org/packages/langchain-ollama)

!!! note "Reference docs"
    This page contains **reference documentation** for Ollama. See [the docs](https://docs.langchain.com/oss/python/integrations/providers/ollama) for conceptual guides, tutorials, and examples on using Ollama modules.
- [OpenAI](/python/langchain-openai)
  # `langchain-openai`

[![PyPI - Version](https://img.shields.io/pypi/v/langchain-openai?label=%20)](https://pypi.org/project/langchain-openai/#history)
[![PyPI - License](https://img.shields.io/pypi/l/langchain-openai)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pepy/dt/langchain-openai)](https://pypistats.org/packages/langchain-openai)

## Modules

> **Usage documentation**
> Refer to [the docs](https://docs.langchain.com/oss/python/integrations/providers/openai) for a high-level guide on how to use each module. These reference pages contain auto-generated API documentation for each module, focusing on the "what" rather than the "how" or "why" (i.e. no end-to-end tutorials or conceptual overviews).

<div class="grid-cards">
  <a href="/python/integrations/langchain_openai/chat-openai/" class="grid-card">
    <div class="grid-card-title"><code>ChatOpenAI</code></div>
    <p>OpenAI chat models.</p>
  </a>
  <a href="/python/integrations/langchain_openai/azure-chat-openai/" class="grid-card">
    <div class="grid-card-title"><code>AzureChatOpenAI</code></div>
    <p>Wrapper for OpenAI chat models hosted on Azure.</p>
  </a>
  <a href="/python/integrations/langchain_openai/openai/" class="grid-card">
    <div class="grid-card-title"><code>OpenAI</code></div>
    <p>(Legacy) OpenAI text completion models.</p>
  </a>
  <a href="/python/integrations/langchain_openai/azure-openai/" class="grid-card">
    <div class="grid-card-title"><code>AzureOpenAI</code></div>
    <p>Wrapper for (legacy) OpenAI text completion models hosted on Azure.</p>
  </a>
  <a href="/python/integrations/langchain_openai/openai-embeddings/" class="grid-card">
    <div class="grid-card-title"><code>OpenAIEmbeddings</code></div>
    <p>OpenAI embedding models.</p>
  </a>
  <a href="/python/integrations/langchain_openai/azure-openai-embeddings/" class="grid-card">
    <div class="grid-card-title"><code>AzureOpenAIEmbeddings</code></div>
    <p>Wrapper for OpenAI embedding models hosted on Azure.</p>
  </a>
  <a href="/python/integrations/langchain_openai/base-chat-openai/" class="grid-card">
    <div class="grid-card-title"><code>BaseChatOpenAI</code></div>
    <p>Base class for OpenAI chat models.</p>
  </a>
  <a href="/python/integrations/langchain_openai/middleware/" class="grid-card">
    <div class="grid-card-title">Middleware</div>
    <p>OpenAI-specific middleware for moderation and safety.</p>
  </a>
</div>
- [OpenRouter](/python/langchain-openrouter)
  # `langchain-openrouter`

[![PyPI - Version](https://img.shields.io/pypi/v/langchain-openrouter?label=%20)](https://pypi.org/project/langchain-openrouter/#history)
[![PyPI - License](https://img.shields.io/pypi/l/langchain-openrouter)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pepy/dt/langchain-openrouter)](https://pypistats.org/packages/langchain-openrouter)

!!! note "Reference docs"
    This page contains **reference documentation** for OpenRouter. See [the docs](https://docs.langchain.com/oss/python/integrations/providers/openrouter) for conceptual guides, tutorials, and examples on using `ChatOpenRouter`.
- [Parallel](/python/langchain-parallel)
  [![PyPI - Version](https://img.shields.io/pypi/v/langchain-parallel?label=%20)](https://pypi.org/project/langchain-parallel/#history)
[![PyPI - License](https://img.shields.io/pypi/l/langchain-parallel)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pepy/dt/langchain-parallel)](https://pypistats.org/packages/langchain-parallel)

!!! warning "Externally maintained"
    These docs are built from the [langchain-parallel repo](https://github.com/parallel-web/langchain-parallel) and have not been verified for accuracy by the LangChain team.

    For issues, please open an issue in the [langchain-parallel repo](https://github.com/parallel-web/langchain-parallel/issues).

## Modules

!!! note "Usage documentation"
    Refer to [the docs](https://docs.langchain.com/oss/python/integrations/providers/parallel) for a high-level guide on how to use each module. These reference pages contain auto-generated API documentation for each module, focusing on the "what" rather than the "how" or "why" (i.e. no end-to-end tutorials or conceptual overviews).

<div class="grid-cards">
  <a href="./ChatParallelWeb" class="grid-card">
    <div class="grid-card-title"><code>ChatParallelWeb</code></div>
    <p>Chat model with real-time web research capabilities.</p>
  </a>
  <a href="./ParallelWebSearchTool" class="grid-card">
    <div class="grid-card-title"><code>ParallelWebSearchTool</code></div>
    <p>Searches the web.</p>
  </a>
  <a href="./ParallelExtractTool" class="grid-card">
    <div class="grid-card-title"><code>ParallelExtractTool</code></div>
    <p>Extracts relevant content from specific web URLs.</p>
  </a>
</div>
- [Perplexity](/python/langchain-perplexity)
  [![PyPI - Version](https://img.shields.io/pypi/v/langchain-perplexity?label=%20)](https://pypi.org/project/langchain-perplexity/#history)
[![PyPI - License](https://img.shields.io/pypi/l/langchain-perplexity)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pepy/dt/langchain-perplexity)](https://pypistats.org/packages/langchain-perplexity)

!!! note "Reference docs"
    This page contains **reference documentation** for Perplexity. See [the docs](https://docs.langchain.com/oss/python/integrations/providers/perplexity) for conceptual guides, tutorials, and examples on using Perplexity modules.
- [Pinecone](/python/langchain-pinecone)
  [![PyPI - Version](https://img.shields.io/pypi/v/langchain-pinecone?label=%20)](https://pypi.org/project/langchain-pinecone/#history)
[![PyPI - License](https://img.shields.io/pypi/l/langchain-pinecone)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pepy/dt/langchain-pinecone)](https://pypistats.org/packages/langchain-pinecone)
- [Postgres](/python/langchain-postgres)
  [![PyPI - Version](https://img.shields.io/pypi/v/langchain-postgres?label=%20)](https://pypi.org/project/langchain-postgres/#history)
[![PyPI - License](https://img.shields.io/pypi/l/langchain-postgres)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pepy/dt/langchain-postgres)](https://pypistats.org/packages/langchain-postgres)

!!! note
    This package ref has not yet been fully migrated to v1.
- [Qdrant](/python/langchain-qdrant)
  [![PyPI - Version](https://img.shields.io/pypi/v/langchain-qdrant?label=%20)](https://pypi.org/project/langchain-qdrant/#history)
[![PyPI - License](https://img.shields.io/pypi/l/langchain-qdrant)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pepy/dt/langchain-qdrant)](https://pypistats.org/packages/langchain-qdrant)

!!! note "Reference docs"
    This page contains **reference documentation** for Qdrant. See [the docs](https://docs.langchain.com/oss/python/integrations/providers/qdrant) for conceptual guides, tutorials, and examples on using Qdrant modules.
- [Redis](/python/langchain-redis)
  [![PyPI - Version](https://img.shields.io/pypi/v/langchain-redis?label=%20)](https://pypi.org/project/langchain-redis/#history)
[![PyPI - License](https://img.shields.io/pypi/l/langchain-redis)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pepy/dt/langchain-redis)](https://pypistats.org/packages/langchain-redis)
- [Sema4](/python/langchain-sema4)
  [![PyPI - Version](https://img.shields.io/pypi/v/langchain-sema4?label=%20)](https://pypi.org/project/langchain-sema4/#history)
[![PyPI - License](https://img.shields.io/pypi/l/langchain-sema4)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pepy/dt/langchain-sema4)](https://pypistats.org/packages/langchain-sema4)

!!! note
    This package ref has not yet been fully migrated to v1.
- [Snowflake](/python/langchain-snowflake)
  [![PyPI - Version](https://img.shields.io/pypi/v/langchain-snowflake?label=%20)](https://pypi.org/project/langchain-snowflake/#history)
[![PyPI - License](https://img.shields.io/pypi/l/langchain-snowflake)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pepy/dt/langchain-snowflake)](https://pypistats.org/packages/langchain-snowflake)

!!! note
    This package ref has not yet been fully migrated to v1.
- [SQLServer](/python/langchain-sqlserver)
  [![PyPI - Version](https://img.shields.io/pypi/v/langchain-sqlserver?label=%20)](https://pypi.org/project/langchain-sqlserver/#history)
[![PyPI - License](https://img.shields.io/pypi/l/langchain-sqlserver)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pepy/dt/langchain-sqlserver)](https://pypistats.org/packages/langchain-sqlserver)

!!! note
    This package ref has not yet been fully migrated to v1.
- [Tavily](/python/langchain-tavily)
  [![PyPI - Version](https://img.shields.io/pypi/v/langchain-tavily?label=%20)](https://pypi.org/project/langchain-tavily/#history)
[![PyPI - License](https://img.shields.io/pypi/l/langchain-tavily)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pepy/dt/langchain-tavily)](https://pypistats.org/packages/langchain-tavily)

!!! warning "Externally maintained"
    These docs are built from the [langchain-tavily repo](https://github.com/tavily-ai/langchain-tavily) and have not been verified for accuracy by the LangChain team.

    For issues, please open an issue in the [langchain-tavily repo](https://github.com/tavily-ai/langchain-tavily/issues).

!!! note "Reference docs"
    This page contains **reference documentation** for Tavily. See [the docs](https://docs.langchain.com/oss/python/integrations/providers/tavily) for conceptual guides, tutorials, and examples on using Tavily modules.
- [Together](/python/langchain-together)
  [![PyPI - Version](https://img.shields.io/pypi/v/langchain-together?label=%20)](https://pypi.org/project/langchain-together/#history)
[![PyPI - License](https://img.shields.io/pypi/l/langchain-together)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pepy/dt/langchain-together)](https://pypistats.org/packages/langchain-together)

!!! note
    This package ref has not yet been fully migrated to v1.
- [Unstructured](/python/langchain-unstructured)
  [![PyPI - Version](https://img.shields.io/pypi/v/langchain-unstructured?label=%20)](https://pypi.org/project/langchain-unstructured/#history)
[![PyPI - License](https://img.shields.io/pypi/l/langchain-unstructured)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pepy/dt/langchain-unstructured)](https://pypistats.org/packages/langchain-unstructured)
- [Upstage](/python/langchain-upstage)
  [![PyPI - Version](https://img.shields.io/pypi/v/langchain-upstage?label=%20)](https://pypi.org/project/langchain-upstage/#history)
[![PyPI - License](https://img.shields.io/pypi/l/langchain-upstage)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pepy/dt/langchain-upstage)](https://pypistats.org/packages/langchain-upstage)
- [Weaviate](/python/langchain-weaviate)
  [![PyPI - Version](https://img.shields.io/pypi/v/langchain-weaviate?label=%20)](https://pypi.org/project/langchain-weaviate/#history)
[![PyPI - License](https://img.shields.io/pypi/l/langchain-weaviate)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pepy/dt/langchain-weaviate)](https://pypistats.org/packages/langchain-weaviate)

!!! note "Reference docs"
    This page contains **reference documentation** for Weaviate. See [the docs](https://docs.langchain.com/oss/python/integrations/providers/weaviate) for conceptual guides, tutorials, and examples on using Weaviate modules.
- [xAI](/python/langchain-xai)
  [![PyPI - Version](https://img.shields.io/pypi/v/langchain-xai?label=%20)](https://pypi.org/project/langchain-xai/#history)
[![PyPI - License](https://img.shields.io/pypi/l/langchain-xai)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pepy/dt/langchain-xai)](https://pypistats.org/packages/langchain-xai)

!!! note "Reference docs"
    This page contains **reference documentation** for xAI. See [the docs](https://docs.langchain.com/oss/python/integrations/providers/xai) for conceptual guides, tutorials, and examples on using xAI modules.
- [LangSmith](/python/langsmith)
  # LangSmith Client SDK

[![Release Notes](https://img.shields.io/github/release/langchain-ai/langsmith-sdk?logo=python)](https://github.com/langchain-ai/langsmith-sdk/releases)
[![Python Downloads](https://static.pepy.tech/badge/langsmith/month)](https://pepy.tech/project/langsmith)

This package contains the Python client for interacting with the [LangSmith platform](https://smith.langchain.com/).

To install:

```bash
pip install -U langsmith
export LANGSMITH_TRACING=true
export LANGSMITH_API_KEY=ls_...
```

Then trace:

```python
import openai
from langsmith.wrappers import wrap_openai
from langsmith import traceable

# Auto-trace LLM calls in-context
client = wrap_openai(openai.Client())

@traceable # Auto-trace this function
def pipeline(user_input: str):
    result = client.chat.completions.create(
        messages=[{"role": "user", "content": user_input}],
        model="gpt-3.5-turbo"
    )
    return result.choices[0].message.content

pipeline("Hello, world!")
```

See the resulting nested trace [🌐 here](https://smith.langchain.com/public/b37ca9b1-60cd-4a2a-817e-3c4e4443fdc0/r).

LangSmith helps you and your team develop and evaluate language models and intelligent agents. It is compatible with any LLM application.

> **Cookbook:** For tutorials on how to get more value out of LangSmith, check out the [Langsmith Cookbook](https://github.com/langchain-ai/langsmith-cookbook/tree/main) repo.

A typical workflow looks like:

1. Set up an account with LangSmith.
2. Log traces while debugging and prototyping.
3. Run benchmark evaluations and continuously improve with the collected data.

We'll walk through these steps in more detail below.

## 1. Connect to LangSmith

Sign up for [LangSmith](https://smith.langchain.com/) using your GitHub or Discord account, or an email address and password. If you sign up with an email, make sure to verify your email address before logging in.

Then, create a unique API key on the [Settings Page](https://smith.langchain.com/settings), which is found in the menu at the top right corner of the page.

> [!NOTE]
> Save the API Key in a secure location. It will not be shown again.

## 2. Log Traces

You can log traces natively using the LangSmith SDK or within your LangChain application.

### Logging Traces with LangChain

LangSmith seamlessly integrates with the Python LangChain library to record traces from your LLM applications.

1. **Copy the environment variables from the Settings Page and add them to your application.**

Tracing can be activated by setting the following environment variables or by manually specifying the LangChainTracer.

```python
import os
os.environ["LANGSMITH_TRACING"] = "true"
os.environ["LANGSMITH_ENDPOINT"] = "https://api.smith.langchain.com"
# os.environ["LANGSMITH_ENDPOINT"] = "https://eu.api.smith.langchain.com" # If signed up in the EU region
os.environ["LANGSMITH_API_KEY"] = "<YOUR-LANGSMITH-API-KEY>"
# os.environ["LANGSMITH_PROJECT"] = "My Project Name" # Optional: "default" is used if not set
# os.environ["LANGSMITH_WORKSPACE_ID"] = "<YOUR-WORKSPACE-ID>" # Required for org-scoped API keys
```

> **Tip:** Projects are groups of traces. All runs are logged to a project. If not specified, the project is set to `default`.

2. **Run an Agent, Chain, or Language Model in LangChain**

If the environment variables are correctly set, your application will automatically connect to the LangSmith platform.

```python
from langchain_core.runnables import chain

@chain
def add_val(x: dict) -> dict:
    return {"val": x["val"] + 1}

add_val({"val": 1})
```

### Logging Traces Outside LangChain

You can still use the LangSmith development platform without depending on any
LangChain code.

1. **Copy the environment variables from the Settings Page and add them to your application.**

```python
import os
os.environ["LANGSMITH_ENDPOINT"] = "https://api.smith.langchain.com"
os.environ["LANGSMITH_API_KEY"] = "<YOUR-LANGSMITH-API-KEY>"
# os.environ["LANGSMITH_PROJECT"] = "My Project Name" # Optional: "default" is used if not set
```

2. **Log traces**

The easiest way to log traces using the SDK is via the `@traceable` decorator. Below is an example.

```python
from datetime import datetime

import openai
from langsmith import traceable
from langsmith.wrappers import wrap_openai

client = wrap_openai(openai.Client())

@traceable
def argument_generator(query: str, additional_description: str = "") -> str:
    return client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a debater making an argument on a topic."
             f"{additional_description}"
             f" The current time is {datetime.now()}"},
            {"role": "user", "content": f"The discussion topic is {query}"}
        ],
    ).choices[0].message.content



@traceable
def argument_chain(query: str, additional_description: str = "") -> str:
    argument = argument_generator(query, additional_description)
    # ... Do other processing or call other functions...
    return argument

argument_chain("Why is blue better than orange?")
```

Alternatively, you can manually log events using the `Client` directly or using a `RunTree`, which is what the traceable decorator is meant to manage for you!

A `RunTree` represents one traced step of your application and can contain child runs. Each `RunTree` object is required to have a `name` and `run_type`. These and other important attributes are as follows:

- `name`: `str` - used to identify the component's purpose
- `run_type`: `str` - Currently one of "llm", "chain" or "tool"; more options will be added in the future
- `inputs`: `dict` - the inputs to the component
- `outputs`: `Optional[dict]` - the (optional) returned values from the component
- `error`: `Optional[str]` - Any error messages that may have arisen during the call

```python
from langsmith.run_trees import RunTree

parent_run = RunTree(
    name="My Chat Bot",
    run_type="chain",
    inputs={"text": "Summarize this morning's meetings."},
    # project_name= "Defaults to the LANGSMITH_PROJECT env var"
)
parent_run.post()
# .. My Chat Bot calls an LLM
child_llm_run = parent_run.create_child(
    name="My Proprietary LLM",
    run_type="llm",
    inputs={
        "prompts": [
            "You are an AI Assistant. The time is XYZ."
            " Summarize this morning's meetings."
        ]
    },
)
child_llm_run.post()
child_llm_run.end(
    outputs={
        "generations": [
            "I should use the transcript_loader tool"
            " to fetch meeting_transcripts from XYZ"
        ]
    }
)
child_llm_run.patch()
# ..  My Chat Bot takes the LLM output and calls
# a tool / function for fetching transcripts ..
child_tool_run = parent_run.create_child(
    name="transcript_loader",
    run_type="tool",
    inputs={"date": "XYZ", "content_type": "meeting_transcripts"},
)
child_tool_run.post()
# The tool returns meeting notes to the chat bot
child_tool_run.end(outputs={"meetings": ["Meeting1 notes.."]})
child_tool_run.patch()

child_chain_run = parent_run.create_child(
    name="Unreliable Component",
    run_type="tool",
    inputs={"input": "Summarize these notes..."},
)
child_chain_run.post()

try:
    # .... the component does work
    raise ValueError("Something went wrong")
    child_chain_run.end(outputs={"output": "foo"})
    child_chain_run.patch()
except Exception as e:
    child_chain_run.end(error=f"I errored again {e}")
    child_chain_run.patch()
# .. The chat agent recovers

parent_run.end(outputs={"output": ["The meeting notes are as follows:..."]})
res = parent_run.patch()
res.result()
```

## Create a Dataset from Existing Runs

Once your runs are stored in LangSmith, you can convert them into a dataset.
For this example, we will do so using the Client, but you can also do this using
the web interface, as explained in the [LangSmith docs](https://docs.smith.langchain.com/).

```python
from langsmith import Client

client = Client()
dataset_name = "Example Dataset"
# We will only use top-level runs here,
# and exclude runs that errored.
runs = client.list_runs(
    project_name="my_project",
    execution_order=1,
    error=False,
)

dataset = client.create_dataset(dataset_name, description="An example dataset")
for run in runs:
    client.create_example(
        inputs=run.inputs,
        outputs=run.outputs,
        dataset_id=dataset.id,
    )
```
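
Once the dataset exists, newer SDK versions also expose a batch `evaluate` helper that runs a target function over each example and applies evaluator functions to the results. A sketch under those assumptions (the target and evaluator bodies are placeholders):

```python
def exact_match(outputs: dict, reference_outputs: dict) -> dict:
    """Score 1.0 when the prediction matches the stored reference exactly."""
    return {
        "key": "exact_match",
        "score": float(outputs.get("output") == reference_outputs.get("output")),
    }

def target(inputs: dict) -> dict:
    """Placeholder system under test; replace with your chain or agent call."""
    return {"output": inputs.get("input")}

# With LANGSMITH_API_KEY configured, run the experiment over the dataset:
# from langsmith import evaluate
# results = evaluate(target, data="Example Dataset", evaluators=[exact_match])
```

Each evaluator returns a feedback dict; the scores are attached to the experiment runs in LangSmith.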

## Evaluating Runs

Check out the [LangSmith Testing & Evaluation docs](https://docs.smith.langchain.com/evaluation) for up-to-date workflows.

For generating automated feedback on individual runs, you can run evaluations directly using the LangSmith client.

```python
from typing import Optional
from langsmith.evaluation import StringEvaluator


def jaccard_chars(output: str, answer: str) -> float:
    """Naive Jaccard similarity between two strings."""
    prediction_chars = set(output.strip().lower())
    answer_chars = set(answer.strip().lower())
    intersection = prediction_chars.intersection(answer_chars)
    union = prediction_chars.union(answer_chars)
    return len(intersection) / len(union) if union else 1.0


def grader(run_input: str, run_output: str, answer: Optional[str]) -> dict:
    """Compute the score and/or label for this run."""
    if answer is None:
        value = "AMBIGUOUS"
        score = 0.5
    else:
        score = jaccard_chars(run_output, answer)
        value = "CORRECT" if score > 0.9 else "INCORRECT"
    return dict(score=score, value=value)

evaluator = StringEvaluator(evaluation_name="Jaccard", grading_function=grader)

runs = client.list_runs(
    project_name="my_project",
    execution_order=1,
    error=False,
)
for run in runs:
    client.evaluate_run(run, evaluator)
```

## Integrations

LangSmith easily integrates with your favorite LLM framework.

## OpenAI SDK


We provide a convenient wrapper for the [OpenAI SDK](https://platform.openai.com/docs/api-reference).

To use it, you first need to set your LangSmith API key.

```shell
export LANGSMITH_API_KEY=<your-api-key>
```

Next, you will need to install the LangSmith SDK:

```shell
pip install -U langsmith
```

After that, you can wrap the OpenAI client:

```python
from openai import OpenAI
from langsmith import wrappers

client = wrappers.wrap_openai(OpenAI())
```

Now, you can use the OpenAI client as you normally would, but now everything is logged to LangSmith!

```python
client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Say this is a test"}],
)
```

Oftentimes, you use the OpenAI client inside other functions.
You can get nested traces by using this wrapped client and decorating those functions with `@traceable`.
See [this documentation](https://docs.smith.langchain.com/tracing/faq/logging_and_viewing) for more details on how to use this decorator.

```python
from langsmith import traceable

@traceable(name="Call OpenAI")
def my_function(text: str):
    return client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": f"Say {text}"}],
    )

my_function("hello world")
```

## Instructor

We provide a convenient integration with [Instructor](https://jxnl.github.io/instructor/), largely because it builds directly on the OpenAI SDK.

To use it, you first need to set your LangSmith API key.

```shell
export LANGSMITH_API_KEY=<your-api-key>
```

Next, you will need to install the LangSmith SDK:

```shell
pip install -U langsmith
```

After that, you can wrap the OpenAI client:

```python
from openai import OpenAI
from langsmith import wrappers

client = wrappers.wrap_openai(OpenAI())
```

After this, you can patch the OpenAI client using `instructor`:

```python
import instructor

# Patch the wrapped client from above so tracing is preserved
client = instructor.patch(client)
```

Now, you can use `instructor` as you normally would, but now everything is logged to LangSmith!

```python
from pydantic import BaseModel


class UserDetail(BaseModel):
    name: str
    age: int


user = client.chat.completions.create(
    model="gpt-3.5-turbo",
    response_model=UserDetail,
    messages=[
        {"role": "user", "content": "Extract Jason is 25 years old"},
    ]
)
```

Oftentimes, you use `instructor` inside other functions.
You can get nested traces by using this wrapped client and decorating those functions with `@traceable`.
See [this documentation](https://docs.smith.langchain.com/tracing/faq/logging_and_viewing) for more details on how to use this decorator.

```python
@traceable()
def my_function(text: str) -> UserDetail:
    return client.chat.completions.create(
        model="gpt-3.5-turbo",
        response_model=UserDetail,
        messages=[
            {"role": "user", "content": f"Extract {text}"},
        ]
    )


my_function("Jason is 25 years old")
```

## Pytest Plugin

The LangSmith pytest plugin lets Python developers define their datasets and evaluations as pytest test cases.
See [online docs](https://docs.smith.langchain.com/evaluation/how_to_guides/pytest) for more information.

This plugin is installed as part of the LangSmith SDK, and is enabled by default.
See also official pytest docs: [How to install and use plugins](https://docs.pytest.org/en/stable/how-to/plugins.html)
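
As an illustration, a decorated test might look like the following; the `langsmith` marker is what the plugin picks up, and the lookup function is a stand-in for your real application code:

```python
import pytest

def lookup_price(sku: str) -> float:
    """Toy system under test; a real test would invoke your chain or agent."""
    prices = {"A1": 9.99, "B2": 14.50}
    return prices[sku]

@pytest.mark.langsmith  # picked up by the plugin and synced as a LangSmith experiment
def test_lookup_price():
    assert lookup_price("A1") == 9.99
```

Running `pytest` as usual then records each marked test case, along with its pass/fail result, as feedback in LangSmith.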

## Additional Documentation

To learn more about the LangSmith platform, check out the [docs](https://docs.smith.langchain.com/).


# License

The LangSmith SDK is licensed under the [MIT License](https://github.com/langchain-ai/langsmith-sdk/tree/da0d6e3f137ff375d787c4ba773c8b8d7832d0d8/python/../LICENSE).

The copyright information for certain dependencies is reproduced in their corresponding COPYRIGHT.txt files in this repo, including the following:

- [uuid-utils](https://github.com/langchain-ai/langsmith-sdk/blob/da0d6e3f137ff375d787c4ba773c8b8d7832d0d8/python/docs/templates/uuid-utils/COPYRIGHT.txt)
- [zstandard](https://github.com/langchain-ai/langsmith-sdk/blob/da0d6e3f137ff375d787c4ba773c8b8d7832d0d8/python/docs/templates/zstandard/COPYRIGHT.txt)
