# Agents
**Reference docs**
This page contains reference documentation for Agents. See the docs for conceptual guides, tutorials, and examples on using Agents.
## langchain.agents
Entrypoint to building Agents with LangChain.
### create_agent
```python
create_agent(
    model: str | BaseChatModel,
    tools: Sequence[BaseTool | Callable | dict[str, Any]] | None = None,
    *,
    system_prompt: str | None = None,
    middleware: Sequence[AgentMiddleware[StateT_co, ContextT]] = (),
    response_format: ResponseFormat[ResponseT] | type[ResponseT] | None = None,
    state_schema: type[AgentState[ResponseT]] | None = None,
    context_schema: type[ContextT] | None = None,
    checkpointer: Checkpointer | None = None,
    store: BaseStore | None = None,
    interrupt_before: list[str] | None = None,
    interrupt_after: list[str] | None = None,
    debug: bool = False,
    name: str | None = None,
    cache: BaseCache | None = None,
) -> CompiledStateGraph[
    AgentState[ResponseT], ContextT, _InputAgentState, _OutputAgentState[ResponseT]
]
```
Creates an agent graph that calls tools in a loop until a stopping condition is met. For more details on using `create_agent`, visit the Agents docs.
| PARAMETER | TYPE | DESCRIPTION |
|---|---|---|
| `model` | `str \| BaseChatModel` | The language model for the agent. Can be a string identifier (e.g., `"anthropic:claude-sonnet-4-5-20250929"`) or a chat model instance. For the full list of supported model strings and more information, see the Models docs. |
| `tools` | `Sequence[BaseTool \| Callable \| dict[str, Any]] \| None` | A list of tools, callables, or tool definition dicts. If not provided, the agent consists of a single model node without a tool-calling loop. See the Tools docs for more information. |
| `system_prompt` | `str \| None` | An optional system prompt for the LLM. Prompts are converted to a system message and added to the beginning of the message list. |
| `middleware` | `Sequence[AgentMiddleware[StateT_co, ContextT]]` | A sequence of middleware instances to apply to the agent. Middleware can intercept and modify agent behavior at various stages. See the Middleware docs for more information. |
| `response_format` | `ResponseFormat[ResponseT] \| type[ResponseT] \| None` | An optional configuration for structured responses. Can be a strategy object or a raw schema type. If provided, the agent will handle structured output during the conversation flow. Raw schemas will be wrapped in an appropriate strategy based on model capabilities. See the Structured output docs for more information. |
| `state_schema` | `type[AgentState[ResponseT]] \| None` | An optional state schema for the agent. When provided, this schema is used instead of the default `AgentState`. Generally, it's recommended to extend agent state through middleware instead. |
| `context_schema` | `type[ContextT] \| None` | An optional schema for runtime context. |
| `checkpointer` | `Checkpointer \| None` | An optional checkpoint saver object. Used for persisting the state of the graph (e.g., as chat memory) for a single thread (e.g., a single conversation). |
| `store` | `BaseStore \| None` | An optional store object. Used for persisting data across multiple threads (e.g., multiple conversations / users). |
| `interrupt_before` | `list[str] \| None` | An optional list of node names to interrupt before. Useful if you want to add a user confirmation or other interrupt before taking an action. |
| `interrupt_after` | `list[str] \| None` | An optional list of node names to interrupt after. Useful if you want to return directly or run additional processing on an output. |
| `debug` | `bool` | Whether to enable verbose logging for graph execution. When enabled, prints detailed information about each node execution, state update, and transition during agent runtime. Useful for debugging middleware behavior and understanding agent execution flow. |
| `name` | `str \| None` | An optional name for the compiled graph. This name will be used automatically when adding the agent graph to another graph as a subgraph node, which is particularly useful for building multi-agent systems. |
| `cache` | `BaseCache \| None` | An optional cache object for the graph. |
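For instance, `response_format` can be given a raw schema type directly. A minimal sketch, assuming a Pydantic model as the schema and that the parsed result is exposed under a `structured_response` key in the final state (the `Forecast` model is hypothetical):

```python
from pydantic import BaseModel

from langchain.agents import create_agent


class Forecast(BaseModel):
    """Schema the agent's final answer must conform to (hypothetical example)."""

    location: str
    summary: str


# Passing the raw schema; create_agent wraps it in an appropriate
# structured-output strategy based on the model's capabilities.
agent = create_agent(
    model="anthropic:claude-sonnet-4-5-20250929",
    response_format=Forecast,
)
result = agent.invoke(
    {"messages": [{"role": "user", "content": "Forecast for SF?"}]}
)
print(result["structured_response"])  # assumed state key for the parsed Forecast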
| RETURNS | DESCRIPTION |
|---|---|
| `CompiledStateGraph[AgentState[ResponseT], ContextT, _InputAgentState, _OutputAgentState[ResponseT]]` | A compiled state graph that can be invoked or streamed to run the agent. |
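Because the return value is a standard compiled LangGraph graph, persistence follows the usual LangGraph pattern. A minimal sketch, assuming `InMemorySaver` from `langgraph.checkpoint.memory` and a `thread_id` supplied via the run config:

```python
from langgraph.checkpoint.memory import InMemorySaver

from langchain.agents import create_agent

# The checkpointer persists state per thread, so follow-up turns
# on the same thread_id share chat memory.
agent = create_agent(
    model="anthropic:claude-sonnet-4-5-20250929",
    checkpointer=InMemorySaver(),
)
config = {"configurable": {"thread_id": "conversation-1"}}
agent.invoke({"messages": [{"role": "user", "content": "Hi, I'm Pat."}]}, config)
result = agent.invoke(
    {"messages": [{"role": "user", "content": "What's my name?"}]}, config
)
print(result["messages"][-1].content)  # the model can recall "Pat" from thread state
```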
The agent node calls the language model with the messages list (after applying the system prompt). If the resulting `AIMessage` contains `tool_calls`, the graph then calls the tools. The tools node executes the tools and appends the responses to the messages list as `ToolMessage` objects. The agent node then calls the language model again. The process repeats until no more `tool_calls` are present in the response, at which point the agent returns the full list of messages.
Example

```python
from langchain.agents import create_agent


def check_weather(location: str) -> str:
    '''Return the weather forecast for the specified location.'''
    return f"It's always sunny in {location}"


graph = create_agent(
    model="anthropic:claude-sonnet-4-5-20250929",
    tools=[check_weather],
    system_prompt="You are a helpful assistant",
)
inputs = {"messages": [{"role": "user", "content": "what is the weather in sf"}]}
for chunk in graph.stream(inputs, stream_mode="updates"):
    print(chunk)
```
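Streaming is optional; the same graph can be invoked to run the full loop and return the final state in one call. A short follow-on sketch reusing `graph` and `inputs` from the example above:

```python
# invoke() runs the tool-calling loop to completion and returns the final
# state; the "messages" list ends with the model's final answer.
result = graph.invoke(inputs)
print(result["messages"][-1].content)
```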