# create_cli_agent

> **Function** in `deepagents_cli`

📖 [View in docs](https://reference.langchain.com/python/deepagents-cli/agent/create_cli_agent)

Create a CLI-configured agent with flexible options.

This is the main entry point for creating a deepagents CLI agent, usable
both internally and from external code (e.g., benchmarking frameworks).

## Signature

```python
create_cli_agent(
    model: str | BaseChatModel,
    assistant_id: str,
    *,
    tools: Sequence[BaseTool | Callable | dict[str, Any]] | None = None,
    sandbox: SandboxBackendProtocol | None = None,
    sandbox_type: str | None = None,
    system_prompt: str | None = None,
    interactive: bool = True,
    auto_approve: bool = False,
    interrupt_shell_only: bool = False,
    shell_allow_list: list[str] | None = None,
    enable_ask_user: bool = True,
    enable_memory: bool = True,
    enable_skills: bool = True,
    enable_shell: bool = True,
    checkpointer: BaseCheckpointSaver | None = None,
    mcp_server_info: list[MCPServerInfo] | None = None,
    cwd: str | Path | None = None,
    project_context: ProjectContext | None = None,
    async_subagents: list[AsyncSubAgent] | None = None,
) -> tuple[Pregel, CompositeBackend]
```

## Parameters

| Name | Type | Required | Description |
|------|------|----------|-------------|
| `model` | `str \| BaseChatModel` | Yes | LLM model to use (e.g., `'anthropic:claude-sonnet-4-6'`) |
| `assistant_id` | `str` | Yes | Agent identifier for memory/state storage |
| `tools` | `Sequence[BaseTool \| Callable \| dict[str, Any]] \| None` | No | Additional tools to provide to agent (default: `None`) |
| `sandbox` | `SandboxBackendProtocol \| None` | No | Optional sandbox backend for remote execution (e.g., `ModalSandbox`). If `None`, uses the local filesystem + shell. (default: `None`) |
| `sandbox_type` | `str \| None` | No | Type of sandbox provider (`'agentcore'`, `'daytona'`, `'langsmith'`, `'modal'`, `'runloop'`). Used for system prompt generation. (default: `None`) |
| `system_prompt` | `str \| None` | No | Override the default system prompt. If `None`, generates one based on `sandbox_type`, `assistant_id`, and `interactive`. (default: `None`) |
| `interactive` | `bool` | No | When `False`, the auto-generated system prompt is tailored for headless non-interactive execution. Ignored when `system_prompt` is provided explicitly. (default: `True`) |
| `auto_approve` | `bool` | No | If `True`, no tools trigger human-in-the-loop interrupts; all calls (shell execution, file writes/edits, web search, URL fetch) run automatically. If `False`, tools pause for user confirmation via the approval menu. See `_add_interrupt_on` for the full list of gated tools. (default: `False`) |
| `interrupt_shell_only` | `bool` | No | If `True`, all HITL interrupts are disabled; shell commands are instead validated inline by `ShellAllowListMiddleware` against the configured allow-list. Used in non-interactive mode with a restrictive shell allow-list to avoid splitting traces into multiple LangSmith runs. Has no effect when `auto_approve` is `True` (interrupts are already disabled) or when `shell_allow_list` is `SHELL_ALLOW_ALL`. (default: `False`) |
| `shell_allow_list` | `list[str] \| None` | No | Explicit restrictive shell allow-list forwarded from the CLI process. When provided (and `interrupt_shell_only` is `True`), used directly instead of reading `settings.shell_allow_list` (which may not be set in the server subprocess environment). (default: `None`) |
| `enable_ask_user` | `bool` | No | Enable `AskUserMiddleware` so the agent can ask clarifying questions. Disabled in non-interactive mode. (default: `True`) |
| `enable_memory` | `bool` | No | Enable `MemoryMiddleware` for persistent memory (default: `True`) |
| `enable_skills` | `bool` | No | Enable `SkillsMiddleware` for custom agent skills (default: `True`) |
| `enable_shell` | `bool` | No | Enable shell execution via `LocalShellBackend` (only in local mode). When enabled, the `execute` tool is available. (default: `True`) |
| `checkpointer` | `BaseCheckpointSaver \| None` | No | Optional checkpointer for session persistence. When `None`, the graph is compiled without a checkpointer. (default: `None`) |
| `mcp_server_info` | `list[MCPServerInfo] \| None` | No | MCP server metadata to surface in the system prompt. (default: `None`) |
| `cwd` | `str \| Path \| None` | No | Override the working directory for the agent's filesystem backend and system prompt. (default: `None`) |
| `project_context` | `ProjectContext \| None` | No | Explicit project path context for project-sensitive behavior such as project `AGENTS.md` files, skills, subagents, and MCP trust. (default: `None`) |
| `async_subagents` | `list[AsyncSubAgent] \| None` | No | Remote LangGraph deployments to expose as async subagent tools. Loaded from `[async_subagents]` in `config.toml` or passed directly. (default: `None`) |
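The `interactive`, `interrupt_shell_only`, and `shell_allow_list` parameters interact as described above. A minimal sketch of a headless, shell-restricted configuration follows; the import path (`deepagents_cli.agent`) and the allow-list values are illustrative assumptions, not confirmed by this page:

```python
# Illustrative allow-list: only these shell commands would be permitted.
SHELL_ALLOW = ["ls", "cat", "grep"]


def build_restricted_agent():
    """Sketch: headless agent with inline shell validation.

    Assumes the import path deepagents_cli.agent; adjust as needed.
    """
    from deepagents_cli.agent import create_cli_agent

    # interrupt_shell_only=True disables HITL interrupts entirely; shell
    # commands are instead checked by ShellAllowListMiddleware against
    # the explicit shell_allow_list passed here.
    return create_cli_agent(
        model="anthropic:claude-sonnet-4-6",
        assistant_id="bench-run",
        interactive=False,       # headless system prompt
        interrupt_shell_only=True,
        shell_allow_list=SHELL_ALLOW,
        enable_ask_user=False,   # no clarifying questions in headless mode
    )
```

Passing `shell_allow_list` explicitly (rather than relying on `settings.shell_allow_list`) matters when the agent is constructed in a subprocess where the CLI settings are not populated.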

## Returns

`tuple[Pregel, CompositeBackend]`

2-tuple of `(agent_graph, backend)`:

- `agent_graph`: configured LangGraph `Pregel` instance, ready for execution
- `backend`: `CompositeBackend` for file operations
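A sketch of consuming the returned tuple, assuming the import path `deepagents_cli.agent` and the usual LangGraph convention that agent graphs take a `"messages"` list as input (neither is confirmed by this page):

```python
def initial_state(prompt: str) -> dict:
    # Minimal input state: LangGraph agents conventionally accept a
    # "messages" list, where (role, content) tuples are valid entries.
    return {"messages": [("user", prompt)]}


def run_agent_once():
    """Sketch: create an auto-approved headless agent and invoke it once."""
    from deepagents_cli.agent import create_cli_agent

    agent_graph, backend = create_cli_agent(
        model="anthropic:claude-sonnet-4-6",
        assistant_id="docs-demo",
        interactive=False,
        auto_approve=True,  # no HITL interrupts in this one-shot run
    )
    # Pregel graphs expose the standard Runnable interface (invoke/stream).
    result = agent_graph.invoke(initial_state("List the files in cwd"))
    return result, backend
```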

---

[View source on GitHub](https://github.com/langchain-ai/deepagents/blob/88c2b5cb874dc1d093acf54d2a967ba6e085c99b/libs/cli/deepagents_cli/agent.py#L891)