# create_deep_agent

> **Function** in `deepagents`

📖 [View in docs](https://reference.langchain.com/python/deepagents/graph/create_deep_agent)

Create a Deep Agent.

!!! warning "Deep Agents require an LLM that supports tool calling!"

By default, this agent has access to the following tools:

- `write_todos`: manage a todo list
- `ls`, `read_file`, `write_file`, `edit_file`, `glob`, `grep`: file operations
- `execute`: run shell commands
- `task`: call subagents

The `execute` tool allows running shell commands if the backend implements `SandboxBackendProtocol`.
For non-sandbox backends, the `execute` tool will return an error message.
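This dispatch can be sketched as a runtime protocol check. The sketch below is illustrative only: the shape given for `SandboxBackendProtocol` and the helper names (`SandboxBackend`, `PlainBackend`, `run_execute`) are assumptions, not the library's implementation.

```python
from typing import Protocol, runtime_checkable


@runtime_checkable
class SandboxBackendProtocol(Protocol):
    """Hypothetical shape of the sandbox protocol: it can run a command."""

    def execute(self, command: str) -> str: ...


class SandboxBackend:
    """Illustrative backend that supports execution."""

    def execute(self, command: str) -> str:
        return f"ran: {command}"


class PlainBackend:
    """Illustrative backend with no execution support."""


def run_execute(backend: object, command: str) -> str:
    # The `execute` tool only runs the command when the backend
    # satisfies the sandbox protocol; otherwise it reports an error.
    if isinstance(backend, SandboxBackendProtocol):
        return backend.execute(command)
    return "Error: this backend does not support shell execution"


print(run_execute(SandboxBackend(), "ls"))  # ran: ls
print(run_execute(PlainBackend(), "ls"))    # Error: ...
```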

## Signature

```python
create_deep_agent(
    model: str | BaseChatModel | None = None,
    tools: Sequence[BaseTool | Callable | dict[str, Any]] | None = None,
    *,
    system_prompt: str | SystemMessage | None = None,
    middleware: Sequence[AgentMiddleware] = (),
    subagents: Sequence[SubAgent | CompiledSubAgent | AsyncSubAgent] | None = None,
    skills: list[str] | None = None,
    memory: list[str] | None = None,
    permissions: list[FilesystemPermission] | None = None,
    response_format: ResponseFormat[ResponseT] | type[ResponseT] | dict[str, Any] | None = None,
    context_schema: type[ContextT] | None = None,
    checkpointer: Checkpointer | None = None,
    store: BaseStore | None = None,
    backend: BackendProtocol | BackendFactory | None = None,
    interrupt_on: dict[str, bool | InterruptOnConfig] | None = None,
    debug: bool = False,
    name: str | None = None,
    cache: BaseCache | None = None,
) -> CompiledStateGraph[AgentState[ResponseT], ContextT, _InputAgentState, _OutputAgentState[ResponseT]]
```

## Parameters

| Name | Type | Required | Description |
|------|------|----------|-------------|
| `model` | `str \| BaseChatModel \| None` | No | The model to use. Defaults to `claude-sonnet-4-6`. Accepts a `provider:model` string (e.g., `openai:gpt-5`); see [`init_chat_model`][langchain.chat_models.init_chat_model(model_provider)] for supported values. You can also pass a pre-initialized [`BaseChatModel`][langchain.chat_models.BaseChatModel] instance directly. **OpenAI models and data retention:** if an `openai:` model is used, the agent uses the OpenAI Responses API by default. To use OpenAI chat completions instead, initialize the model with `init_chat_model("openai:...", use_responses_api=False)` and pass the initialized model instance here. To disable data retention with the Responses API, use `init_chat_model("openai:...", use_responses_api=True, store=False, include=["reasoning.encrypted_content"])` and pass the initialized model instance here. (default: `None`) |
| `tools` | `Sequence[BaseTool \| Callable \| dict[str, Any]] \| None` | No | Additional tools the agent should have access to.  These are merged with the built-in tool suite listed above (`write_todos`, filesystem tools, `execute`, and `task`). (default: `None`) |
| `system_prompt` | `str \| SystemMessage \| None` | No | Custom system instructions to prepend before the base Deep Agent prompt.  If a string, it's concatenated with the base prompt. (default: `None`) |
| `middleware` | `Sequence[AgentMiddleware]` | No | Additional middleware to apply after the base stack but before the tail middleware. The full ordering is: the base stack (`TodoListMiddleware`; `SkillsMiddleware` if `skills` is provided; `FilesystemMiddleware`; `SubAgentMiddleware`; `SummarizationMiddleware`; `PatchToolCallsMiddleware`; `AsyncSubAgentMiddleware` if async `subagents` are provided), then user middleware, then the tail stack (`AnthropicPromptCachingMiddleware`; `MemoryMiddleware` if `memory` is provided; `HumanInTheLoopMiddleware` if `interrupt_on` is provided; `_PermissionMiddleware` if permission rules are present, always last). (default: `()`) |
| `subagents` | `Sequence[SubAgent \| CompiledSubAgent \| AsyncSubAgent] \| None` | No | Subagent specs available to the main agent. Three forms are supported: [`SubAgent`][deepagents.middleware.subagents.SubAgent] (a declarative synchronous subagent spec), [`CompiledSubAgent`][deepagents.middleware.subagents.CompiledSubAgent] (a pre-compiled runnable subagent), and [`AsyncSubAgent`][deepagents.middleware.async_subagents.AsyncSubAgent] (a remote/background subagent spec). `SubAgent` entries are invoked through the `task` tool; they should provide `name`, `description`, and `system_prompt`, and may also override `tools`, `model`, `middleware`, `interrupt_on`, and `skills` (see `interrupt_on` below for inheritance and override behavior). `CompiledSubAgent` entries are also exposed through the `task` tool, but provide a pre-built `runnable` instead of a declarative prompt and tool configuration. `AsyncSubAgent` entries are identified by their async-subagent fields (`graph_id`, and optionally `url`/`headers`) and are routed into `AsyncSubAgentMiddleware` instead of `SubAgentMiddleware`; they should provide `name`, `description`, and `graph_id`, run as background tasks, and expose the async subagent tools for launching, checking, updating, cancelling, and listing tasks. If no subagent named `general-purpose` is provided, a default general-purpose synchronous subagent is added automatically. (default: `None`) |
| `skills` | `list[str] \| None` | No | List of skill source paths (e.g., `["/skills/user/", "/skills/project/"]`).  Paths must be specified using POSIX conventions (forward slashes) and are relative to the backend's root. When using `StateBackend` (default), provide skill files via `invoke(files={...})`. With `FilesystemBackend`, skills are loaded from disk relative to the backend's `root_dir`. Later sources override earlier ones for skills with the same name (last one wins). (default: `None`) |
| `memory` | `list[str] \| None` | No | List of memory file paths (`AGENTS.md` files) to load (e.g., `["/memory/AGENTS.md"]`). Display names are automatically derived from the paths. Memory is loaded at agent startup and injected into the system prompt. (default: `None`) |
| `permissions` | `list[FilesystemPermission] \| None` | No | List of `FilesystemPermission` rules for the main agent and its subagents. Rules are evaluated in declaration order; the first match wins. If no rule matches, the call is allowed. Subagents inherit these rules unless they specify their own `permissions` field, which replaces the parent's rules entirely. `_PermissionMiddleware` is appended last in the stack so it sees all tools (including those injected by other middleware). (default: `None`) |
| `response_format` | `ResponseFormat[ResponseT] \| type[ResponseT] \| dict[str, Any] \| None` | No | A structured output response format to use for the agent. (default: `None`) |
| `context_schema` | `type[ContextT] \| None` | No | Schema class that defines immutable run-scoped context.  Passed through to [`create_agent`][langchain.agents.create_agent]. (default: `None`) |
| `checkpointer` | `Checkpointer \| None` | No | Optional `Checkpointer` for persisting agent state between runs.  Passed through to [`create_agent`][langchain.agents.create_agent]. (default: `None`) |
| `store` | `BaseStore \| None` | No | Optional store for persistent storage (required if backend uses `StoreBackend`).  Passed through to [`create_agent`][langchain.agents.create_agent]. (default: `None`) |
| `backend` | `BackendProtocol \| BackendFactory \| None` | No | Optional backend for file storage and execution.  Pass a `Backend` instance (e.g. `StateBackend()`).  For execution support, use a backend that implements `SandboxBackendProtocol`. (default: `None`) |
| `interrupt_on` | `dict[str, bool \| InterruptOnConfig] \| None` | No | Mapping of tool names to interrupt configs. Pass to pause agent execution at specified tool calls for human approval or modification. This config always applies to the main agent. For subagents: declarative `SubAgent` specs inherit the top-level `interrupt_on` config by default; if a declarative `SubAgent` provides its own `interrupt_on`, that subagent-specific config overrides the inherited top-level config; `CompiledSubAgent` runnables do not inherit the top-level config (configure human-in-the-loop behavior inside the compiled runnable itself); remote `AsyncSubAgent` specs do not inherit it either (configure any approval behavior on the remote subagent itself). For example, `interrupt_on={"edit_file": True}` pauses before every edit. (default: `None`) |
| `debug` | `bool` | No | Whether to enable debug mode.  Passed through to [`create_agent`][langchain.agents.create_agent]. (default: `False`) |
| `name` | `str \| None` | No | The name of the agent.  Passed through to [`create_agent`][langchain.agents.create_agent]. (default: `None`) |
| `cache` | `BaseCache \| None` | No | The cache to use for the agent.  Passed through to [`create_agent`][langchain.agents.create_agent]. (default: `None`) |
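
The first-match-wins evaluation described for `permissions` can be sketched as follows. This is an illustrative model rather than the library's implementation: the `pattern`/`allow` fields and the `is_allowed` helper are assumptions.

```python
from dataclasses import dataclass
from fnmatch import fnmatch


@dataclass
class FilesystemPermission:
    """Hypothetical rule shape: a glob pattern plus an allow/deny flag."""

    pattern: str
    allow: bool


def is_allowed(path: str, rules: list[FilesystemPermission]) -> bool:
    for rule in rules:          # rules are checked in declaration order
        if fnmatch(path, rule.pattern):
            return rule.allow   # first match wins
    return True                 # no rule matched: the call is allowed


rules = [
    FilesystemPermission("/secrets/*", allow=False),
    FilesystemPermission("/*", allow=True),
]
print(is_allowed("/secrets/key.pem", rules))  # False (first rule matches)
print(is_allowed("/notes.md", rules))         # True
```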

## Returns

`CompiledStateGraph[AgentState[ResponseT], ContextT, _InputAgentState, _OutputAgentState[ResponseT]]`

A configured Deep Agent.

---

[View source on GitHub](https://github.com/langchain-ai/deepagents/blob/b710a69b12e49479045eaa54dfb709326473500b/libs/deepagents/deepagents/graph.py#L109)