Base classes and utilities for LangChain tools.
Ensure that a config is a dict with all keys present.
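For illustration, a minimal sketch of how ensure_config fills in defaults, assuming it is imported from langchain_core.runnables:

from langchain_core.runnables import ensure_config

# Passing None (or a partial dict) yields a config with every key present,
# e.g. empty tags/metadata and the default recursion limit.
config = ensure_config(None)
print(config["tags"], config["recursion_limit"])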
Patch a config with new values.
Run a function in an executor.
Set the child Runnable config + tracing context.
Await a coroutine with a context.
Return the field names of a Pydantic model.
Check if the given class is a subclass of Pydantic BaseModel.
Check if the given class is a subclass of any of the following: pydantic.BaseModel in Pydantic 2.x, or pydantic.v1.BaseModel in Pydantic 2.x.
Check if the given class is Pydantic v1-like.
Check if the given class is Pydantic v2-like.
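A short sketch of the subclass check, assuming is_basemodel_subclass is importable from langchain_core.utils.pydantic:

from pydantic import BaseModel
from langchain_core.utils.pydantic import is_basemodel_subclass

class Point(BaseModel):
    x: int
    y: int

# True for models based on pydantic.BaseModel or pydantic.v1.BaseModel.
print(is_basemodel_subclass(Point))  # True
print(is_basemodel_subclass(dict))   # False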
Create a Pydantic schema from a function's signature.
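As a sketch, assuming create_schema_from_function lives in langchain_core.tools.base and accepts a model name plus a callable:

from langchain_core.tools.base import create_schema_from_function

def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

# Derive a Pydantic model whose fields mirror multiply's parameters.
MultiplySchema = create_schema_from_function("multiply", multiply)
print(MultiplySchema(a=2, b=3))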
Get all annotations from a Pydantic BaseModel and its parents.
Async callback manager that handles callbacks from LangChain.
Callback manager for LangChain.
Represents an AI's request to call a tool.
Message for passing the result of executing a tool back to a model.
ToolMessage objects contain the result of a tool invocation. Typically, the result
is encoded inside the content field.
tool_call_id is used to associate the tool call request with the tool call
response. Useful in situations where a chat model is able to request multiple tool
calls in parallel.
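A minimal sketch of the request/response pairing (the tool name and call ID here are made up for illustration):

from langchain_core.messages import AIMessage, ToolMessage

# The model requests a tool call...
ai_msg = AIMessage(
    content="",
    tool_calls=[{"name": "multiply", "args": {"a": 6, "b": 7}, "id": "call_1"}],
)

# ...and the result is passed back, linked by the same tool_call_id.
result = ToolMessage(content="42", name="multiply", tool_call_id="call_1")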
Mixin for objects that tools can return directly.
If a custom BaseTool is invoked with a ToolCall and the output of custom code is
not an instance of ToolOutputMixin, the output will automatically be coerced to
a string and wrapped in a ToolMessage.
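A brief sketch of the distinction, assuming ToolOutputMixin is importable from langchain_core.messages.tool and using a made-up return type:

from dataclasses import dataclass
from langchain_core.messages.tool import ToolOutputMixin

@dataclass
class RawTable(ToolOutputMixin):
    rows: list

# A tool that returns RawTable passes the object through unchanged when
# invoked with a ToolCall; returning a plain dict or list instead would be
# coerced to a string and wrapped in a ToolMessage.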
Configuration for a Runnable.
Custom values can be passed through the config alongside the standard keys.
The TypedDict has total=False set intentionally: partial configs are valid, and they are combined via merge_configs and var_child_runnable_config (a ContextVar that automatically passes config down the call stack without explicit parameter passing), where configs are merged rather than replaced. For example:

# Parent sets tags
chain.invoke(input, config={"tags": ["parent"]})
# Child automatically inherits and can add:
# ensure_config({"tags": ["child"]}) -> {"tags": ["parent", "child"]}

Runnable that can be serialized to JSON.
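A small round-trip sketch, assuming the dumps/loads helpers from langchain_core.load and a ChatPromptTemplate as the serializable runnable:

from langchain_core.load import dumps, loads
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages([("human", "Tell me about {topic}")])

# Serializable runnables can be written out as JSON and reconstructed later.
as_json = dumps(prompt)
restored = loads(as_json)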
Raised when args_schema is missing or has an incorrect type annotation.
Exception thrown when a tool execution error occurs.
This exception allows tools to signal errors without stopping the agent.
The error is handled according to the tool's handle_tool_error setting, and the
result is returned as an observation to the agent.
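A small sketch of that behavior, with handle_tool_error enabled after tool creation (the tool name and message are made up):

from langchain_core.tools import ToolException, tool

@tool
def lookup(city: str) -> str:
    """Look up the weather for a city."""
    raise ToolException(f"No data for {city}")

# With handle_tool_error enabled, the error text becomes the observation
# returned to the agent instead of an exception propagating.
lookup.handle_tool_error = True
print(lookup.invoke({"city": "Atlantis"}))  # -> "No data for Atlantis"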
Base class for all LangChain tools.
This abstract class defines the interface that all LangChain tools must implement.
Tools are components that can be called by agents to perform specific actions.
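A minimal custom tool sketch (name, description, and _run are the pieces a subclass typically supplies; args_schema, error handling, and async support are omitted here):

from langchain_core.tools import BaseTool

class EchoTool(BaseTool):
    """Echo the input text back to the caller."""

    name: str = "echo"
    description: str = "Return the input text unchanged."

    def _run(self, text: str) -> str:
        return text

print(EchoTool().invoke({"text": "hello"}))  # -> "hello"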
Annotation for tool arguments that are injected at runtime.
Tool arguments annotated with this class are not included in the tool schema sent to language models and are instead injected during execution.
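For example (the tool and its arguments are hypothetical):

from typing import Annotated
from langchain_core.tools import InjectedToolArg, tool

@tool
def fetch_docs(
    query: str,
    user_id: Annotated[str, InjectedToolArg],
) -> str:
    """Fetch documents for a query."""
    return f"{user_id}: results for {query}"

# The schema shown to the model advertises only `query`; `user_id` is
# supplied by the caller (e.g. an agent runtime) at invocation time.
print(fetch_docs.invoke({"query": "pydantic", "user_id": "u-123"}))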
Annotation for injecting the tool call ID.
This annotation is used to mark a tool parameter that should receive the tool call ID at runtime.
from typing import Annotated
from langchain_core.messages import ToolMessage
from langchain_core.tools import tool, InjectedToolCallId
@tool
def foo(
    x: int, tool_call_id: Annotated[str, InjectedToolCallId]
) -> ToolMessage:
    """Return x."""
    return ToolMessage(
        str(x),
        artifact=x,
        name="foo",
        tool_call_id=tool_call_id,
    )

Base class for toolkits containing related tools.
A toolkit is a collection of related tools that can be used together to accomplish a specific task or work with a particular system.
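A minimal sketch of a toolkit grouping two hypothetical tools:

from langchain_core.tools import BaseTool, BaseToolkit, tool

@tool
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

@tool
def subtract(a: int, b: int) -> int:
    """Subtract b from a."""
    return a - b

class MathToolkit(BaseToolkit):
    """Groups related arithmetic tools."""

    def get_tools(self) -> list[BaseTool]:
        return [add, subtract]

for t in MathToolkit().get_tools():
    print(t.name)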