Creates an agent graph that calls tools in a loop until a stopping condition is met.

For more details on using `create_agent`, visit the Agents docs.
```python
create_agent(
    model: str | BaseChatModel,
    tools: Sequence[BaseTool | Callable[..., Any] | dict[str, Any]] | None = None,
    *,
    system_prompt: str | SystemMessage | None = None,
    middleware: Sequence[AgentMiddleware[StateT_co, ContextT]] = (),
    response_format: ResponseFormat[ResponseT] | type[ResponseT] | dict[str, Any] | None = None,
    state_schema: type[AgentState[ResponseT]] | None = None,
    context_schema: type[ContextT] | None = None,
    checkpointer: Checkpointer | None = None,
    store: BaseStore | None = None,
    interrupt_before: list[str] | None = None,
    interrupt_after: list[str] | None = None,
    debug: bool = False,
    name: str | None = None,
    cache: BaseCache[Any] | None = None
) -> CompiledStateGraph[AgentState[ResponseT], ContextT, _InputAgentState, _OutputAgentState[ResponseT]]
```

The agent node calls the language model with the messages list (after applying
the system prompt). If the resulting AIMessage
contains tool_calls, the graph will then call the tools. The tools node executes
the tools and adds the responses to the messages list as
ToolMessage objects. The agent node then calls
the language model again. The process repeats until no more tool_calls are present
in the response. The agent then returns the full list of messages.
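The model-tools loop described above can be sketched in plain Python. This is a conceptual sketch only, assuming simplified `AIMessage`/`ToolCall` stand-ins and a `run_agent` helper that are hypothetical, not the real LangChain classes:

```python
from dataclasses import dataclass, field


@dataclass
class ToolCall:
    name: str
    args: dict


@dataclass
class AIMessage:
    content: str
    tool_calls: list[ToolCall] = field(default_factory=list)


def run_agent(model, tools, messages):
    """Alternate between the model and tools until the model stops
    requesting tool calls, then return the full message list."""
    while True:
        ai = model(messages)          # agent node: call the LLM
        messages.append(ai)
        if not ai.tool_calls:         # stopping condition: no more tool calls
            return messages
        for call in ai.tool_calls:    # tools node: run each requested tool
            result = tools[call.name](**call.args)
            messages.append({"role": "tool", "content": result})


# Scripted stand-in model: first requests a tool, then answers.
script = iter([
    AIMessage("", [ToolCall("check_weather", {"location": "sf"})]),
    AIMessage("It's always sunny in sf"),
])
history = run_agent(
    model=lambda messages: next(script),
    tools={"check_weather": lambda location: f"It's always sunny in {location}"},
    messages=[{"role": "user", "content": "what is the weather in sf"}],
)
# history now holds the user message, the tool-calling AIMessage,
# the tool result, and the final AIMessage.
```

The real graph adds middleware hooks, structured output, and persistence around this core loop, but the control flow is the same.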
Example:

```python
from langchain.agents import create_agent


def check_weather(location: str) -> str:
    '''Return the weather forecast for the specified location.'''
    return f"It's always sunny in {location}"


graph = create_agent(
    model="anthropic:claude-sonnet-4-5-20250929",
    tools=[check_weather],
    system_prompt="You are a helpful assistant",
)
inputs = {"messages": [{"role": "user", "content": "what is the weather in sf"}]}
for chunk in graph.stream(inputs, stream_mode="updates"):
    print(chunk)
```

| Name | Type | Description |
|---|---|---|
| `model`* | `str \| BaseChatModel` | The language model for the agent. Can be a string identifier (e.g., `"anthropic:claude-sonnet-4-5-20250929"`) or a `BaseChatModel` instance. Tip: See the Models docs for more information, including the full list of supported model strings. |
| `tools` | `Sequence[BaseTool \| Callable[..., Any] \| dict[str, Any]] \| None` | Default: `None`. A list of tools the agent can call; each entry may be a `BaseTool`, a plain callable, or a dict tool schema. If `None` or an empty sequence is provided, the agent consists of a single model node without a tool-calling loop. Tip: See the Tools docs for more information. |
| `system_prompt` | `str \| SystemMessage \| None` | Default: `None`. An optional system prompt for the LLM, provided either as a plain string or as a `SystemMessage`. |
| `middleware` | `Sequence[AgentMiddleware[StateT_co, ContextT]]` | Default: `()`. A sequence of middleware instances to apply to the agent. Middleware can intercept and modify agent behavior at various stages. Tip: See the Middleware docs for more information. |
| `response_format` | `ResponseFormat[ResponseT] \| type[ResponseT] \| dict[str, Any] \| None` | Default: `None`. An optional configuration for structured responses. Can be a `ResponseFormat` strategy, a schema type, or a dict schema. If provided, the agent will handle structured output during the conversation flow. Raw schemas will be wrapped in an appropriate strategy based on model capabilities. Tip: See the Structured output docs for more information. |
| `state_schema` | `type[AgentState[ResponseT]] \| None` | Default: `None`. An optional state schema type, which must be a subclass of `AgentState`. When provided, this schema is used instead of the default `AgentState`. Generally, it's recommended to extend agent state through middleware rather than by overriding this schema. |
| `context_schema` | `type[ContextT] \| None` | Default: `None`. An optional schema for runtime context. |
| `checkpointer` | `Checkpointer \| None` | Default: `None`. An optional checkpoint saver object. Used for persisting the state of the graph (e.g., as chat memory) for a single thread (e.g., a single conversation). |
| `store` | `BaseStore \| None` | Default: `None`. An optional store object. Used for persisting data across multiple threads (e.g., multiple conversations / users). |
| `interrupt_before` | `list[str] \| None` | Default: `None`. An optional list of node names to interrupt before. Useful if you want to add a user confirmation or other interrupt before taking an action. |
| `interrupt_after` | `list[str] \| None` | Default: `None`. An optional list of node names to interrupt after. Useful if you want to return directly or run additional processing on an output. |
| `debug` | `bool` | Default: `False`. Whether to enable verbose logging for graph execution. When enabled, prints detailed information about each node execution, state updates, and transitions during agent runtime. Useful for debugging middleware behavior and understanding agent execution flow. |
| `name` | `str \| None` | Default: `None`. An optional name for the compiled agent graph. This name will be automatically used when adding the agent graph to another graph as a subgraph node, which is particularly useful for building multi-agent systems. |
| `cache` | `BaseCache[Any] \| None` | Default: `None`. An optional `BaseCache` instance used to enable caching for the graph. |
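The difference between `checkpointer` (state persisted per thread, e.g. one conversation) and `store` (data persisted across threads) can be illustrated with a dependency-free sketch. `DemoCheckpointer` and `DemoStore` below are hypothetical stand-ins for illustration only, not the real `Checkpointer` or `BaseStore` classes:

```python
class DemoCheckpointer:
    """Persists graph state (here, a message list) per thread_id."""

    def __init__(self):
        self._threads = {}

    def save(self, thread_id, messages):
        self._threads[thread_id] = list(messages)

    def load(self, thread_id):
        return list(self._threads.get(thread_id, []))


class DemoStore:
    """Persists data shared across all threads / conversations."""

    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)


checkpointer = DemoCheckpointer()
store = DemoStore()

# Thread "1" builds up its own chat memory; thread "2" is unaffected.
checkpointer.save("1", [{"role": "user", "content": "hi"}])
# Store data is visible from every thread.
store.put("user_name", "Alice")
```

In practice you would pass real implementations (e.g. an in-memory or database-backed checkpointer) to `create_agent` and select the thread via the run configuration.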