
Tools

langchain.tools.tool

tool(
    name_or_callable: str | Callable | None = None,
    runnable: Runnable | None = None,
    *args: Any,
    description: str | None = None,
    return_direct: bool = False,
    args_schema: ArgsSchema | None = None,
    infer_schema: bool = True,
    response_format: Literal["content", "content_and_artifact"] = "content",
    parse_docstring: bool = False,
    error_on_invalid_docstring: bool = True
) -> BaseTool | Callable[[Callable | Runnable], BaseTool]

Convert Python functions into tools; can be used as a decorator with or without arguments.

PARAMETER DESCRIPTION
name_or_callable

Optional name of the tool or the callable to be converted to a tool. Must be provided as a positional argument.

TYPE: str | Callable | None DEFAULT: None

runnable

Optional runnable to convert to a tool. Must be provided as a positional argument.

TYPE: Runnable | None DEFAULT: None

description

Optional description for the tool. Precedence for the tool description value is as follows:

  • description argument (used even if docstring and/or args_schema are provided)
  • Tool function docstring (used even if args_schema is provided)
  • args_schema description (used only if description / docstring are not provided)

TYPE: str | None DEFAULT: None
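The precedence order above amounts to a first-match lookup over the three candidate sources. The helper below is an illustrative sketch of that resolution logic only; `resolve_description` is a hypothetical name, not part of the library:

```python
def resolve_description(description=None, docstring=None, schema_description=None):
    """Return the first available description, in precedence order."""
    # precedence: explicit description > function docstring > args_schema description
    for candidate in (description, docstring, schema_description):
        if candidate is not None:
            return candidate
    return None
```

So an explicit `description` argument always wins, and the `args_schema` description is used only when the other two are absent.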

*args

Extra positional arguments. Must be empty.

TYPE: Any DEFAULT: ()

return_direct

Whether to return directly from the tool rather than continuing the agent loop.

TYPE: bool DEFAULT: False

args_schema

Optional schema for the tool's arguments, for the user to specify.

TYPE: ArgsSchema | None DEFAULT: None

infer_schema

Whether to infer the schema of the arguments from the function's signature. This also makes the resulting tool accept a dictionary input to its run() method.

TYPE: bool DEFAULT: True
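Conceptually, schema inference walks the function signature and maps each annotation to a JSON-Schema type. The sketch below illustrates the idea with the standard library only; `infer_args_schema` and `TYPE_MAP` are hypothetical names, and this is not langchain's actual implementation:

```python
import inspect

# Illustrative mapping from Python annotations to JSON-Schema type names.
TYPE_MAP = {int: "integer", str: "string", float: "number", bool: "boolean"}

def infer_args_schema(func):
    """Build a JSON-Schema-like dict from a function's signature (sketch only)."""
    properties = {}
    for name, param in inspect.signature(func).parameters.items():
        json_type = TYPE_MAP.get(param.annotation, "string")
        properties[name] = {"title": name.capitalize(), "type": json_type}
    return {"title": func.__name__, "type": "object", "properties": properties}
```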

response_format

The tool response format. If "content" then the output of the tool is interpreted as the contents of a ToolMessage. If "content_and_artifact" then the output is expected to be a two-tuple corresponding to the (content, artifact) of a ToolMessage.

TYPE: Literal['content', 'content_and_artifact'] DEFAULT: 'content'
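The two response formats differ only in how the tool's raw output is split before it becomes a ToolMessage. A minimal sketch of that split, assuming a hypothetical helper name `split_tool_output`:

```python
def split_tool_output(output, response_format="content"):
    """Split a tool's raw output into (content, artifact) per response_format."""
    if response_format == "content_and_artifact":
        # expects a two-tuple: content goes to the model, artifact is kept out-of-band
        content, artifact = output
        return content, artifact
    # plain "content": the whole output is the message content, no artifact
    return output, None
```

The artifact half is useful for bulky results (full query payloads, binary data) that downstream code needs but the model should not see verbatim.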

parse_docstring

If infer_schema and parse_docstring are both True, attempts to parse parameter descriptions from Google-style function docstrings.

TYPE: bool DEFAULT: False

error_on_invalid_docstring

If parse_docstring is True, configures whether to raise ValueError on invalid Google-style docstrings.

TYPE: bool DEFAULT: True

RAISES DESCRIPTION
ValueError

If too many positional arguments are provided.

ValueError

If a runnable is provided without a string name.

ValueError

If the first argument is not a string or callable with a __name__ attribute.

ValueError

If the function does not have a docstring and description is not provided and infer_schema is False.

ValueError

If parse_docstring is True and the function has an invalid Google-style docstring and error_on_invalid_docstring is True.

ValueError

If a Runnable is provided that does not have an object schema.

RETURNS DESCRIPTION
BaseTool | Callable[[Callable | Runnable], BaseTool]

The tool.

Requires
  • Function must be of type (str) -> str
  • Function must have a docstring

Examples:

@tool
def search_api(query: str) -> str:
    """Searches the API for the query."""
    return "results"


@tool("search", return_direct=True)
def search_api(query: str) -> str:
    """Searches the API for the query."""
    return "results"


@tool(response_format="content_and_artifact")
def search_api(query: str) -> tuple[str, dict]:
    """Searches the API for the query."""
    return "partial json of results", {"full": "object of results"}

Added in version 0.2.14

Parse Google-style docstrings:

@tool(parse_docstring=True)
def foo(bar: str, baz: int) -> str:
    """The foo.

    Args:
        bar: The bar.
        baz: The baz.
    """
    return bar

foo.args_schema.model_json_schema()
{
    "title": "foo",
    "description": "The foo.",
    "type": "object",
    "properties": {
        "bar": {
            "title": "Bar",
            "description": "The bar.",
            "type": "string",
        },
        "baz": {
            "title": "Baz",
            "description": "The baz.",
            "type": "integer",
        },
    },
    "required": ["bar", "baz"],
}

Note that by default, parsing will raise ValueError if the docstring is considered invalid. A docstring is considered invalid if it documents arguments that are not in the function signature, or if it cannot be parsed into a summary and an "Args:" block. Examples below:

# No args section
def invalid_docstring_1(bar: str, baz: int) -> str:
    """The foo."""
    return bar

# Improper whitespace between summary and args section
def invalid_docstring_2(bar: str, baz: int) -> str:
    """The foo.
    Args:
        bar: The bar.
        baz: The baz.
    """
    return bar

# Documented args absent from function signature
def invalid_docstring_3(bar: str, baz: int) -> str:
    """The foo.

    Args:
        banana: The bar.
        monkey: The baz.
    """
    return bar
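The three failure modes above can be reproduced with a toy parser. The sketch below is an illustrative, stdlib-only approximation of Google-style "Args:" parsing; `parse_google_args` is a hypothetical name and this is not the library's parser:

```python
import inspect
import re

def parse_google_args(func):
    """Parse an 'Args:' block from a Google-style docstring (sketch only)."""
    doc = inspect.getdoc(func) or ""
    # a valid docstring needs a summary, a blank line, then an "Args:" block
    parts = doc.split("\n\nArgs:\n", 1)
    if len(parts) != 2:
        raise ValueError("docstring missing summary or 'Args:' block")
    descriptions = {}
    for line in parts[1].splitlines():
        match = re.match(r"\s+(\w+):\s*(.*)", line)
        if match:
            descriptions[match.group(1)] = match.group(2)
    # documented args must exist in the function signature
    extra = set(descriptions) - set(inspect.signature(func).parameters)
    if extra:
        raise ValueError(f"docstring documents unknown args: {sorted(extra)}")
    return descriptions
```

Against the three invalid examples, the split fails for the missing and mis-whitespaced "Args:" blocks, and the signature check fails for the `banana`/`monkey` case.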

langchain.tools.BaseTool

Bases: RunnableSerializable[str | dict | ToolCall, Any]

Base class for all LangChain tools.

This abstract class defines the interface that all LangChain tools must implement. Tools are components that can be called by agents to perform specific actions.

METHOD DESCRIPTION
invoke

Transform a single input into an output.

ainvoke

Transform a single input into an output.

get_input_schema

The tool's input schema.

get_output_schema

Get a Pydantic model that can be used to validate output to the Runnable.

name instance-attribute

name: str

The unique name of the tool that clearly communicates its purpose.

description instance-attribute

description: str

Used to tell the model how/when/why to use the tool.

You can provide few-shot examples as a part of the description.

response_format class-attribute instance-attribute

response_format: Literal['content', 'content_and_artifact'] = 'content'

The tool response format.

If "content" then the output of the tool is interpreted as the contents of a ToolMessage. If "content_and_artifact" then the output is expected to be a two-tuple corresponding to the (content, artifact) of a ToolMessage.

invoke

invoke(
    input: str | dict | ToolCall, config: RunnableConfig | None = None, **kwargs: Any
) -> Any

Transform a single input into an output.

PARAMETER DESCRIPTION
input

The input to the Runnable.

TYPE: Input

config

A config to use when invoking the Runnable. The config supports standard keys like 'tags', 'metadata' for tracing purposes, 'max_concurrency' for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details.

TYPE: RunnableConfig | None DEFAULT: None

RETURNS DESCRIPTION
Output

The output of the Runnable.

ainvoke async

ainvoke(
    input: str | dict | ToolCall, config: RunnableConfig | None = None, **kwargs: Any
) -> Any

Transform a single input into an output.

PARAMETER DESCRIPTION
input

The input to the Runnable.

TYPE: Input

config

A config to use when invoking the Runnable. The config supports standard keys like 'tags', 'metadata' for tracing purposes, 'max_concurrency' for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details.

TYPE: RunnableConfig | None DEFAULT: None

RETURNS DESCRIPTION
Output

The output of the Runnable.

get_input_schema

get_input_schema(config: RunnableConfig | None = None) -> type[BaseModel]

The tool's input schema.

PARAMETER DESCRIPTION
config

The configuration for the tool.

TYPE: RunnableConfig | None DEFAULT: None

RETURNS DESCRIPTION
type[BaseModel]

The input schema for the tool.

get_output_schema

get_output_schema(config: RunnableConfig | None = None) -> type[BaseModel]

Get a Pydantic model that can be used to validate output to the Runnable.

Runnable objects that leverage the configurable_fields and configurable_alternatives methods will have a dynamic output schema that depends on which configuration the Runnable is invoked with.

This method allows you to get an output schema for a specific configuration.

PARAMETER DESCRIPTION
config

A config to use when generating the schema.

TYPE: RunnableConfig | None DEFAULT: None

RETURNS DESCRIPTION
type[BaseModel]

A Pydantic model that can be used to validate output.

langchain.tools.InjectedState

Bases: InjectedToolArg

Annotation for injecting graph state into tool arguments.

This annotation enables tools to access graph state without exposing state management details to the language model. Tools annotated with InjectedState receive state data automatically during execution while remaining invisible to the model's tool-calling interface.

PARAMETER DESCRIPTION
field

Optional key to extract from the state dictionary. If None, the entire state is injected. If specified, only that field's value is injected. This allows tools to request specific state components rather than processing the full state structure.

TYPE: str | None DEFAULT: None

Example
from typing import List
from typing_extensions import Annotated, TypedDict

from langchain_core.messages import BaseMessage, AIMessage
from langchain.tools import InjectedState, ToolNode, tool


class AgentState(TypedDict):
    messages: List[BaseMessage]
    foo: str


@tool
def state_tool(x: int, state: Annotated[dict, InjectedState]) -> str:
    '''Do something with state.'''
    if len(state["messages"]) > 2:
        return state["foo"] + str(x)
    else:
        return "not enough messages"


@tool
def foo_tool(x: int, foo: Annotated[str, InjectedState("foo")]) -> str:
    '''Do something else with state.'''
    return foo + str(x + 1)


node = ToolNode([state_tool, foo_tool])

tool_call1 = {"name": "state_tool", "args": {"x": 1}, "id": "1", "type": "tool_call"}
tool_call2 = {"name": "foo_tool", "args": {"x": 1}, "id": "2", "type": "tool_call"}
state = {
    "messages": [AIMessage("", tool_calls=[tool_call1, tool_call2])],
    "foo": "bar",
}
node.invoke(state)
[
    ToolMessage(content="not enough messages", name="state_tool", tool_call_id="1"),
    ToolMessage(content="bar2", name="foo_tool", tool_call_id="2"),
]
Note
  • InjectedState arguments are automatically excluded from tool schemas presented to language models
  • ToolNode handles the injection process during execution
  • Tools can mix regular arguments (controlled by the model) with injected arguments (controlled by the system)
  • State injection occurs after the model generates tool calls but before tool execution
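The schema-exclusion behavior in the first bullet can be sketched with the standard library alone: walk the function's annotations and drop any parameter whose `Annotated` metadata carries an injection marker. `InjectedMarker` and `model_visible_params` below are hypothetical stand-ins, not langchain's implementation:

```python
from typing import Annotated, get_args, get_origin, get_type_hints

class InjectedMarker:
    """Stand-in for an InjectedToolArg-style marker (illustrative only)."""
    def __init__(self, field=None):
        self.field = field

def _is_injected(metadata):
    # accept both a marker instance and the bare marker class as annotations
    return any(
        isinstance(m, InjectedMarker)
        or (isinstance(m, type) and issubclass(m, InjectedMarker))
        for m in metadata
    )

def model_visible_params(func):
    """Names of parameters the model should see; injected args are filtered out."""
    hints = get_type_hints(func, include_extras=True)
    visible = []
    for name, hint in hints.items():
        if name == "return":
            continue
        if get_origin(hint) is Annotated and _is_injected(get_args(hint)[1:]):
            continue
        visible.append(name)
    return visible
```

Under this scheme the model is asked only for `x`, while `state` is supplied by the runtime.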
METHOD DESCRIPTION
__init__

Initialize the InjectedState annotation.

__init__

__init__(field: str | None = None) -> None

Initialize the InjectedState annotation.

langchain.tools.InjectedStore

Bases: InjectedToolArg

Annotation for injecting persistent store into tool arguments.

This annotation enables tools to access LangGraph's persistent storage system without exposing storage details to the language model. Tools annotated with InjectedStore receive the store instance automatically during execution while remaining invisible to the model's tool-calling interface.

The store provides persistent, cross-session data storage that tools can use for maintaining context, user preferences, or any other data that needs to persist beyond individual workflow executions.

Warning

InjectedStore annotation requires langchain-core >= 0.3.8

Example
from typing import Any
from typing_extensions import Annotated
from langgraph.store.memory import InMemoryStore
from langchain.tools import InjectedStore, ToolNode, tool

@tool
def save_preference(
    key: str,
    value: str,
    store: Annotated[Any, InjectedStore()]
) -> str:
    """Save user preference to persistent storage."""
    store.put(("preferences",), key, value)
    return f"Saved {key} = {value}"

@tool
def get_preference(
    key: str,
    store: Annotated[Any, InjectedStore()]
) -> str:
    """Retrieve user preference from persistent storage."""
    result = store.get(("preferences",), key)
    return result.value if result else "Not found"

Usage with ToolNode and graph compilation:

from langgraph.graph import StateGraph
from langgraph.store.memory import InMemoryStore

store = InMemoryStore()
tool_node = ToolNode([save_preference, get_preference])

graph = StateGraph(State)
graph.add_node("tools", tool_node)
compiled_graph = graph.compile(store=store)  # Store is injected automatically

Cross-session persistence:

from langchain_core.messages import HumanMessage

# First session
result1 = graph.invoke({"messages": [HumanMessage("Save my favorite color as blue")]})

# Later session - data persists
result2 = graph.invoke({"messages": [HumanMessage("What's my favorite color?")]})
Note
  • InjectedStore arguments are automatically excluded from tool schemas presented to language models
  • The store instance is automatically injected by ToolNode during execution
  • Tools can access namespaced storage using the store's get/put methods
  • Store injection requires the graph to be compiled with a store instance
  • Multiple tools can share the same store instance for data consistency
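The namespaced get/put surface the tools rely on can be modeled with a plain dict keyed by `(namespace, key)` pairs. `MiniStore` below is a toy sketch for illustration, not langgraph's `InMemoryStore` (in particular, the real store's `get` returns an item object with a `.value` attribute rather than the raw value):

```python
class MiniStore:
    """Toy namespaced key-value store (illustrative only)."""
    def __init__(self):
        self._data = {}

    def put(self, namespace, key, value):
        # namespace is a tuple, e.g. ("preferences",), keeping tools' data separated
        self._data[(namespace, key)] = value

    def get(self, namespace, key):
        # returns the raw value, or None if absent
        return self._data.get((namespace, key))
```

Because every tool reads and writes through the same store instance, data written by `save_preference` in one session is visible to `get_preference` in a later one.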

langchain.tools.InjectedToolArg

Annotation for tool arguments that are injected at runtime.

Tool arguments annotated with this class are not included in the tool schema sent to language models and are instead injected during execution.

langchain.tools.InjectedToolCallId

Bases: InjectedToolArg

Annotation for injecting the tool call ID.

This annotation is used to mark a tool parameter that should receive the tool call ID at runtime.

from typing import Annotated
from langchain_core.messages import ToolMessage
from langchain_core.tools import tool, InjectedToolCallId

@tool
def foo(
    x: int, tool_call_id: Annotated[str, InjectedToolCallId]
) -> ToolMessage:
    """Return x."""
    return ToolMessage(
        str(x),
        artifact=x,
        name="foo",
        tool_call_id=tool_call_id
    )