Create a Deep Agent.
By default, this agent has access to the following tools:
- `write_todos`: manage a todo list
- `ls`, `read_file`, `write_file`, `edit_file`, `glob`, `grep`: file operations
- `execute`: run shell commands
- `task`: call subagents

The `execute` tool runs shell commands when the backend implements `SandboxBackendProtocol`. For non-sandbox backends, `execute` returns an error message.
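A minimal usage sketch. The `deepagents` import path and the invocation shape are assumptions; the import guard keeps the sketch importable when the package is absent:

```python
# Hypothetical usage sketch for create_deep_agent.
# The import path below is an assumption; adjust to your installation.
try:
    from deepagents import create_deep_agent
except ImportError:  # package not installed: keep the sketch importable
    create_deep_agent = None

def get_weather(city: str) -> str:
    """Toy custom tool, merged with the built-in tool suite."""
    return f"It is sunny in {city}."

if create_deep_agent is not None:
    agent = create_deep_agent(
        tools=[get_weather],
        system_prompt="You are a concise weather assistant.",
    )
    # The compiled graph follows the LangGraph invoke convention (assumed):
    # result = agent.invoke(
    #     {"messages": [{"role": "user", "content": "Weather in Paris?"}]}
    # )
```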
```python
create_deep_agent(
    model: str | BaseChatModel | None = None,
    tools: Sequence[BaseTool | Callable | dict[str, Any]] | None = None,
    *,
    system_prompt: str | SystemMessage | None = None,
    middleware: Sequence[AgentMiddleware] = (),
    subagents: Sequence[SubAgent | CompiledSubAgent | AsyncSubAgent] | None = None,
    skills: list[str] | None = None,
    memory: list[str] | None = None,
    response_format: ResponseFormat[ResponseT] | type[ResponseT] | dict[str, Any] | None = None,
    context_schema: type[ContextT] | None = None,
    checkpointer: Checkpointer | None = None,
    store: BaseStore | None = None,
    backend: BackendProtocol | BackendFactory | None = None,
    interrupt_on: dict[str, bool | InterruptOnConfig] | None = None,
    debug: bool = False,
    name: str | None = None,
    cache: BaseCache | None = None
) -> CompiledStateGraph[AgentState[ResponseT], ContextT, _InputAgentState, _OutputAgentState[ResponseT]]
```

| Name | Type | Description |
|---|---|---|
| model | `str \| BaseChatModel \| None` | Default: `None`. The model to use, as a model identifier string or a `BaseChatModel` instance. When using an OpenAI model with the Responses API, see the note on OpenAI models and data retention. |
| tools | `Sequence[BaseTool \| Callable \| dict[str, Any]] \| None` | Default: `None`. Additional tools the agent should have access to. These are merged with the built-in tool suite listed above. |
| system_prompt | `str \| SystemMessage \| None` | Default: `None`. Custom system instructions prepended before the base Deep Agent prompt. If a string, it is concatenated with the base prompt. |
| middleware | `Sequence[AgentMiddleware]` | Default: `()`. Additional middleware applied after the base middleware stack and before the tail middleware; the full ordering is the base stack, then user middleware, then the tail stack. |
| subagents | `Sequence[SubAgent \| CompiledSubAgent \| AsyncSubAgent] \| None` | Default: `None`. Subagent specs available to the main agent, in any of three forms: a `SubAgent` spec, a `CompiledSubAgent`, or an `AsyncSubAgent`. If no subagent named `general-purpose` is provided, a default general-purpose subagent is added. |
| skills | `list[str] \| None` | Default: `None`. List of skill source paths. Paths must be specified using POSIX conventions (forward slashes) and are relative to the backend's root. |
| memory | `list[str] \| None` | Default: `None`. List of memory file paths. Display names are derived automatically from the paths. Memory is loaded at agent startup and added to the system prompt. |
| response_format | `ResponseFormat[ResponseT] \| type[ResponseT] \| dict[str, Any] \| None` | Default: `None`. A structured output response format to use for the agent. |
| context_schema | `type[ContextT] \| None` | Default: `None`. Schema class defining immutable run-scoped context. Passed through to the underlying agent constructor. |
| checkpointer | `Checkpointer \| None` | Default: `None`. Optional checkpointer for persisting agent state across runs. Passed through to the underlying agent constructor. |
| store | `BaseStore \| None` | Default: `None`. Optional store for persistent storage (required if the backend uses the store). Passed through to the underlying agent constructor. |
| backend | `BackendProtocol \| BackendFactory \| None` | Default: `None`. Optional backend for file storage and execution; pass a `BackendProtocol` instance or a `BackendFactory`. For execution support, use a backend that implements `SandboxBackendProtocol`. |
| interrupt_on | `dict[str, bool \| InterruptOnConfig] \| None` | Default: `None`. Mapping of tool names to interrupt configs; use it to pause agent execution at the specified tool calls for human approval or modification. This config always applies to the main agent. |
| debug | `bool` | Default: `False`. Whether to enable debug mode. Passed through to the underlying agent constructor. |
| name | `str \| None` | Default: `None`. The name of the agent. Passed through to the underlying agent constructor. |
| cache | `BaseCache \| None` | Default: `None`. The cache to use for the agent. Passed through to the underlying agent constructor. |
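A sketch of a dict-style subagent spec for the `subagents` parameter. The field names used here (`name`, `description`, `prompt`, `tools`) are assumptions about the `SubAgent` type; check the type's definition before relying on them:

```python
# Hypothetical dict-style SubAgent spec. The field names below are
# assumptions about the SubAgent type, not confirmed by this section.
research_subagent = {
    "name": "researcher",
    "description": "Delegate open-ended research questions to this subagent.",
    "prompt": "You are a focused research assistant.",
    "tools": [],  # optionally restrict the subagent to a subset of tools
}

# The main agent would receive it via:
# create_deep_agent(subagents=[research_subagent])
```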
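A sketch of an `interrupt_on` mapping using only the boolean form; the richer `InterruptOnConfig` shape is not shown because its fields are not documented in this section:

```python
# interrupt_on mapping, boolean form only: True pauses execution at the
# named tool call for human approval, False never interrupts it.
interrupt_on = {
    "execute": True,      # pause before running shell commands
    "write_file": True,   # pause before writing files
    "read_file": False,   # never interrupt on reads
}

# create_deep_agent(interrupt_on=interrupt_on)
```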
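Since `skills` and `memory` paths must use POSIX conventions relative to the backend's root, paths built on Windows need normalizing. A small sketch using the standard library (`to_backend_path` is a hypothetical helper, not part of the package):

```python
from pathlib import PureWindowsPath

def to_backend_path(raw: str) -> str:
    """Normalize a possibly Windows-style path to the POSIX form
    (forward slashes) that the skills/memory parameters expect."""
    return PureWindowsPath(raw).as_posix()

skills = [to_backend_path(p) for p in [r"skills\summarize", "skills/search"]]
# → ["skills/summarize", "skills/search"]
```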