Agent management and creation.
Accepted internal output modes for CLI subcommands.
Default agent / assistant identifier when no -a flag is given.
Default LangGraph runnable config.
Sets recursion_limit to 1000 to accommodate deeply nested agent graphs without
hitting the default LangGraph ceiling.
When True, compact_conversation requires HITL approval like other gated tools.
Matches the ### Model Identity section in the system prompt, up to the
next heading or end of string.
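A pattern with that behavior could look like the following (the exact regex used by the project is not shown in the source; this is an illustrative reconstruction):

```python
import re

# Matches the "### Model Identity" section body, stopping at the
# next heading line or at end of string.
MODEL_IDENTITY_RE = re.compile(
    r"^### Model Identity\n(?P<body>.*?)(?=^#|\Z)",
    re.MULTILINE | re.DOTALL,
)

prompt = "### Model Identity\nYou are Claude.\n\n### Tools\n..."
m = MODEL_IDENTITY_RE.search(prompt)
```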
Get the default coding agent instructions.
These are the immutable base instructions that cannot be modified by the agent. Long-term memory (AGENTS.md) is handled separately by the middleware.
Get the glyph set for the current charset mode.
Get the default working directory for a given sandbox provider.
Read the server project context from environment transport data.
List subagents from user and/or project directories.
Scans for subagent definitions in the provided directories. Project subagents override user subagents with the same name.
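The override rule can be expressed as a last-writer-wins merge over the scanned directories. A sketch, assuming subagent definitions are markdown files keyed by filename stem (the file extension and function name are assumptions):

```python
from pathlib import Path

def list_subagents(user_dir: Path, project_dir: Path) -> dict[str, Path]:
    # Later directories win on name collisions, so project
    # subagents override user subagents with the same name.
    found: dict[str, Path] = {}
    for directory in (user_dir, project_dir):
        if not directory.is_dir():
            continue
        for path in sorted(directory.glob("*.md")):
            found[path.stem] = path
    return found
```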
Check a URL for suspicious Unicode and domain spoofing patterns.
Detect deceptive or hidden Unicode code points in text.
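A heuristic sketch of such detection, using Unicode's Cf (format) category, which covers bidirectional overrides and zero-width characters (the real detector likely checks more categories and ranges; this is not the project's implementation):

```python
import unicodedata

def find_hidden_codepoints(text: str) -> list[str]:
    # Cf (format) characters are invisible but affect rendering,
    # e.g. U+202E RIGHT-TO-LEFT OVERRIDE or U+200B ZERO WIDTH SPACE.
    return [
        f"U+{ord(ch):04X}"
        for ch in text
        if unicodedata.category(ch) == "Cf"
    ]
```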
Join safety warnings into a display string with overflow indicator.
Render hidden Unicode characters as explicit markers.
Example output: abc<U+202E RIGHT-TO-LEFT OVERRIDE>def.
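The marker format above can be produced with `unicodedata.name` (a sketch, assuming only format-category characters are rewritten):

```python
import unicodedata

def render_hidden(text: str) -> str:
    # Replace invisible format characters with explicit markers
    # like <U+202E RIGHT-TO-LEFT OVERRIDE>; visible chars pass through.
    out = []
    for ch in text:
        if unicodedata.category(ch) == "Cf":
            name = unicodedata.name(ch, "UNKNOWN")
            out.append(f"<U+{ord(ch):04X} {name}>")
        else:
            out.append(ch)
    return "".join(out)
```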
Remove known dangerous/invisible Unicode characters from text.
Summarize Unicode issues for warning messages.
Deduplicates by code point. When more than max_items unique entries exist,
the summary is truncated with a "+N more entries" suffix.
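The dedupe-then-truncate behavior can be sketched as follows (function name and output format are illustrative assumptions):

```python
def summarize_codepoints(chars: list[str], max_items: int = 3) -> str:
    # Deduplicate by code point, preserving first-seen order.
    unique = list(dict.fromkeys(f"U+{ord(c):04X}" for c in chars))
    summary = ", ".join(unique[:max_items])
    if len(unique) > max_items:
        summary += f" (+{len(unique) - max_items} more entries)"
    return summary
```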
Load async subagent definitions from config.toml.
Reads the [async_subagents] section, where each sub-table defines a remote
LangGraph deployment:

    [async_subagents.researcher]
    description = "Research agent"
    url = "https://my-deployment.langsmith.dev"
    graph_id = "agent"
Return a sorted list of available agent names from ~/.deepagents/.
Scans the user's .deepagents directory and returns each real
subdirectory found there. Symlinks are excluded so a dangling link
cannot masquerade as an agent, and dot-prefixed entries (e.g., .state/)
are skipped so internal app state never appears as an agent.
Filesystem errors (missing parent, permission denied, broken entries) are logged and surfaced as an empty list rather than raised — the caller shows an empty modal instead of crashing mid-render.
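The scan described above (real subdirectories only, symlinks and dot-entries skipped, errors swallowed into an empty list) can be sketched with pathlib; the function name is an assumption:

```python
import logging
from pathlib import Path

log = logging.getLogger(__name__)

def list_agent_names(root: Path) -> list[str]:
    # Real subdirectories only: symlinks and dot-prefixed entries
    # (e.g. .state/) are excluded; filesystem errors yield [].
    try:
        return sorted(
            entry.name
            for entry in root.iterdir()
            if entry.is_dir()
            and not entry.is_symlink()
            and not entry.name.startswith(".")
        )
    except OSError as exc:
        log.warning("could not list agents in %s: %s", root, exc)
        return []
```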
List all available agents.
Reset an agent to default or copy from another agent.
Build the ### Model Identity section for the system prompt.
Get the base system prompt for the agent.
Loads the base system prompt template from system_prompt.md and
interpolates dynamic sections (model identity, working directory,
skills path, execution mode, and todo-list guidance for
interactive vs headless).
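The interpolation step amounts to filling named placeholders in the loaded template; a sketch using string.Template (the placeholder names here are assumptions, not the template's real variables):

```python
from string import Template

# Stand-in for the contents of system_prompt.md.
template = Template("$model_identity\n\nWorking directory: $cwd\n")

# safe_substitute leaves any unknown placeholders intact instead
# of raising, which is forgiving when the template evolves.
prompt = template.safe_substitute(
    model_identity="### Model Identity\nYou are a coding agent.",
    cwd="/workspace",
)
```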
Create a CLI-configured agent with flexible options.
This is the main entry point for creating a Deep Agents Code agent, usable both internally and from external code (e.g., benchmarking frameworks).
Metadata for a configured MCP server and its tools.
Swap the model or per-call settings from runtime.context.
Reads two optional keys from the runtime context dict:

'model' — a provider:model spec (e.g. "openai:gpt-5"). When present and
different from the current model, the request is re-routed to the new
model.

'model_params' — a dict of extra model settings (e.g. {"temperature": 0})
that are shallow-merged into the request's model_settings.

This middleware is typically the outermost layer, so it intercepts every
model call before provider-specific middleware (such as
AnthropicPromptCachingMiddleware) runs.
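The override logic reduces to a conditional reassignment plus a shallow merge. A framework-free sketch, assuming the request object exposes `.model` (a "provider:model" string) and `.model_settings` (a dict) — both names are assumptions:

```python
def apply_runtime_overrides(request, context: dict):
    # 'model': re-route when a different spec is supplied.
    spec = context.get("model")
    if spec and spec != request.model:
        request.model = spec
    # 'model_params': shallow-merge into model_settings; per-call
    # params win over existing keys.
    params = context.get("model_params")
    if params:
        request.model_settings = {**request.model_settings, **params}
    return request
```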
Inject local context (git state, project structure, etc.) into the system prompt.
Runs a bash detection script via backend.execute() on first interaction
and again after each summarization event, stores the result in state, and
appends it to the system prompt on every model call.
Because the script runs inside the backend, it works for both local shells and remote sandboxes.
Explicit user/project path context for project-sensitive behavior.
Validate shell commands against an allow-list without HITL interrupts.
When the agent invokes the execute shell tool, this middleware checks
the command against the configured allow-list before execution.
Rejected commands are returned as error ToolMessage objects — the
graph never pauses, so LangSmith traces stay as a single continuous
run.
Use this middleware in non-interactive mode to avoid the interrupt/resume cycle that fragments traces.