# get_system_prompt

> **Function** in `deepagents_cli`

📖 [View in docs](https://reference.langchain.com/python/deepagents-cli/agent/get_system_prompt)

Get the base system prompt for the agent.

Loads the base system prompt template from `system_prompt.md` and
interpolates its dynamic sections: model identity, working directory,
skills path, execution mode, and todo-list guidance (which differs for
interactive vs. headless runs).
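
The interpolation step can be sketched with the standard library's `string.Template`. This is a simplified stand-in, not the actual loader: the template text, placeholder names, and todo-guidance wording below are illustrative (the real template uses `{...}`-style placeholders, as shown in the example further down).

```python
from string import Template

# Hypothetical stand-in for the loader: substitute dynamic sections
# into a prompt template, switching the todo guidance on `interactive`.
def render_prompt(template_text: str, *, model: str, provider: str,
                  cwd: str, interactive: bool = True) -> str:
    todo_note = (
        "Maintain a todo list and confirm steps with the user."
        if interactive
        else "Run to completion without pausing for user input."
    )
    return Template(template_text).safe_substitute(
        MODEL=model,
        PROVIDER=provider,
        CWD=cwd,
        TODO_GUIDANCE=todo_note,
    )

template = (
    "You are running as model $MODEL (provider: $PROVIDER).\n"
    "Working directory: $CWD\n"
    "$TODO_GUIDANCE"
)
print(render_prompt(template, model="gpt-4o", provider="openai", cwd="/tmp"))
```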

## Signature

```python
get_system_prompt(
    assistant_id: str,
    sandbox_type: str | None = None,
    *,
    interactive: bool = True,
    cwd: str | Path | None = None,
) -> str
```

## Description

**Example** (template excerpt; the `{...}` placeholders are interpolated at call time):

```txt
You are running as model {MODEL} (provider: {PROVIDER}).

Your context window is {CONTEXT_WINDOW} tokens.

... {CONDITIONAL SECTIONS} ...
```

## Parameters

| Name | Type | Required | Description |
|------|------|----------|-------------|
| `assistant_id` | `str` | Yes | The agent identifier for path references |
| `sandbox_type` | `str \| None` | No | Type of sandbox provider (`'agentcore'`, `'daytona'`, `'langsmith'`, `'modal'`, `'runloop'`). If `None`, the agent operates in local mode. (default: `None`) |
| `interactive` | `bool` | No | When `False`, the prompt is tailored for headless non-interactive execution (no human in the loop). (default: `True`) |
| `cwd` | `str \| Path \| None` | No | Override the working directory shown in the prompt. (default: `None`) |

## Returns

`str`

The interpolated system prompt string.

---

[View source on GitHub](https://github.com/langchain-ai/deepagents/blob/829909166606f8a9d9571b00da725845bad08da7/libs/cli/deepagents_cli/agent.py#L472)