Deep Agents package.
Create a deep agent.
By default, this agent has access to the following tools:
- write_todos: manage a todo list
- ls, read_file, write_file, edit_file, glob, grep: file operations
- execute: run shell commands
- task: call subagents

The execute tool allows running shell commands if the backend implements
SandboxBackendProtocol.
For non-sandbox backends, the execute tool will return an error message.
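The backend-dependent availability of the execute tool can be sketched in plain Python. This is a minimal illustration, not the package's actual implementation: the class names mirror the docs, but the method shapes and the tool-assembly helper are assumptions.

```python
from typing import Protocol, runtime_checkable


@runtime_checkable
class SandboxBackendProtocol(Protocol):
    """Hypothetical shape: a backend that can also run shell commands."""

    def execute(self, command: str) -> str: ...


class StoreBackend:
    """File storage only -- no shell execution."""

    def read(self, path: str) -> str:
        return ""


class SandboxBackend(StoreBackend):
    def execute(self, command: str) -> str:
        return f"ran: {command}"


def available_tools(backend: object) -> list[str]:
    tools = ["write_todos", "ls", "read_file", "write_file",
             "edit_file", "glob", "grep", "task"]
    # The execute tool is only offered when the backend can run commands.
    if isinstance(backend, SandboxBackendProtocol):
        tools.append("execute")
    return tools
```

A structural `isinstance` check against a `runtime_checkable` protocol is one idiomatic way to gate a capability on the backend's interface.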
Specification for an async subagent running on a remote Agent Protocol server.
Async subagents connect to any Agent Protocol-compliant server via the LangGraph SDK. They run as background tasks that the main agent can monitor and update.
Compatible with LangGraph Platform (managed) and self-hosted servers.
Authentication for LangGraph Platform is handled automatically by the SDK
via environment variables (LANGGRAPH_API_KEY, LANGSMITH_API_KEY, or
LANGCHAIN_API_KEY). For self-hosted servers, pass custom auth via
headers.
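The custom-headers-versus-environment-variables fallback described above might look roughly like this. The function name, the header name, and the precedence order are assumptions for illustration; only the environment-variable names come from the docs.

```python
import os
from typing import Optional


def resolve_auth_headers(custom_headers: Optional[dict] = None) -> dict:
    """Sketch: explicit headers win; otherwise fall back to the env vars
    the SDK checks (LANGGRAPH_API_KEY, LANGSMITH_API_KEY, LANGCHAIN_API_KEY).
    """
    if custom_headers:
        return dict(custom_headers)
    for var in ("LANGGRAPH_API_KEY", "LANGSMITH_API_KEY", "LANGCHAIN_API_KEY"):
        key = os.environ.get(var)
        if key:
            # Header name is assumed here; self-hosted servers may expect
            # a different scheme (e.g. Authorization: Bearer ...).
            return {"x-api-key": key}
    return {}
```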
Middleware for async subagents running on remote Agent Protocol servers.
This middleware adds tools for launching, monitoring, and updating
background tasks on remote Agent Protocol servers. Unlike the synchronous
SubAgentMiddleware, async subagents return immediately with a task ID,
allowing the main agent to continue working while subagents execute.
Works with any Agent Protocol-compliant server — LangGraph Platform (managed) or self-hosted (e.g. a FastAPI server implementing the Agent Protocol spec).
Task IDs are persisted in the agent state under async_tasks so they
survive context compaction/offloading and can be accessed programmatically.
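The task-tracking behavior can be sketched with plain dict-based state. The `async_tasks` key is from the docs; the per-task record shape and the helper names are assumptions.

```python
def launch_task(state: dict, task_id: str, description: str) -> dict:
    """Record a newly launched background task in agent state."""
    tasks = dict(state.get("async_tasks", {}))
    tasks[task_id] = {"description": description, "status": "running"}
    return {**state, "async_tasks": tasks}


def complete_task(state: dict, task_id: str, result: str) -> dict:
    """Mark a tracked task as finished and attach its result."""
    tasks = dict(state.get("async_tasks", {}))
    tasks[task_id] = {**tasks[task_id], "status": "done", "result": result}
    return {**state, "async_tasks": tasks}
```

Because the records live in persisted state rather than in the message history, they remain available after the history is compacted or offloaded.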
Middleware for providing filesystem and optional execution tools to an agent.
This middleware adds filesystem tools to the agent: ls, read_file, write_file,
edit_file, glob, and grep.
Files can be stored using any backend that implements the
BackendProtocol.
If the backend implements
SandboxBackendProtocol,
an execute tool is also added for running shell commands.
This middleware also automatically evicts large tool results to the file system when they exceed a token threshold, preventing context window saturation.
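The eviction behavior can be illustrated with a short sketch. The function name, the rough chars-per-token heuristic, and the path scheme are all assumptions; the real middleware presumably uses an actual tokenizer and its configured backend.

```python
def maybe_evict(result: str, threshold_tokens: int, files: dict) -> str:
    """If a tool result exceeds the token threshold, write it to the file
    store and return a short pointer message instead of the full text."""
    approx_tokens = len(result) // 4  # crude ~4-chars-per-token heuristic
    if approx_tokens <= threshold_tokens:
        return result
    path = f"/evicted/result_{len(files)}.txt"
    files[path] = result  # stand-in for the configured backend
    return f"Result too large ({approx_tokens} tokens); saved to {path}"
```

The agent can later read the evicted file with read_file or grep only the parts it needs, keeping the full payload out of the context window.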
A single access rule for filesystem operations.
Middleware for loading agent memory from AGENTS.md files.
Loads memory content from configured sources and injects into the system prompt.
Supports multiple sources that are combined together.
A pre-compiled agent spec.
The runnable's state schema must include a 'messages' key.
This is required for the subagent to communicate results back to the main agent.
When the subagent completes, the final message in the 'messages' list will be
extracted and returned as a ToolMessage to the parent agent.
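The hand-back step above can be sketched as follows. The helper name and the message shape (dicts with a `content` key) are assumptions; only the "final message in 'messages'" contract comes from the docs.

```python
def extract_result(subagent_state: dict) -> str:
    """Pull the final message's content from a finished subagent's state,
    as described: this is what becomes the parent's ToolMessage."""
    messages = subagent_state.get("messages", [])
    if not messages:
        raise ValueError("subagent state must include a non-empty 'messages' key")
    return messages[-1]["content"]
```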
Specification for an agent.
When using create_deep_agent, subagents automatically receive a default middleware
stack (TodoListMiddleware, FilesystemMiddleware, SummarizationMiddleware, etc.) before
any custom middleware specified in this spec.
Middleware for providing subagents to an agent via a task tool.
This middleware adds a task tool to the agent that can be used to invoke subagents.
Subagents are useful for handling complex tasks that require multiple steps, or tasks
that require a lot of context to resolve.
A chief benefit of subagents is that they can work through a multi-step task and return only a clean, concise response to the main agent. Subagents are also well suited to distinct domains of expertise that call for a narrower set of tools and a tighter focus.
Edits applied to the auto-added general-purpose subagent.
deepagents.profiles exposes beta APIs that may receive minor changes in
future releases. Refer to the versioning documentation
for more details.
These settings only affect the default subagent that create_deep_agent
inserts when the caller does not explicitly provide a subagent named
general-purpose.
Runtime configuration for deep agent behavior.
deepagents.profiles exposes beta APIs that may receive minor changes in
future releases. Refer to the versioning documentation
for more details.
A HarnessProfile describes prompt-assembly, tool visibility, middleware,
and default-subagent adjustments applied by create_deep_agent once a
chat model has been constructed. Profiles are registered via
register_harness_profile under a provider key ("openai") or a full
provider:model key ("openai:gpt-5.4").
This complements ProviderProfile, which controls the model-construction
phase (e.g. init_chat_model kwargs, pre-init side effects). Concerns
that shape how the model is built belong in ProviderProfile; concerns
that shape how the agent runs belong here.
For YAML/JSON-backed profiles, use HarnessProfileConfig, which contains
only the declarative subset and can be passed directly to
register_harness_profile.
The extra_middleware field expects
langchain.agents.middleware.types.AgentMiddleware instances or a
factory returning a sequence of them.
Declarative harness-profile config for YAML/JSON-backed profiles.
deepagents.profiles exposes beta APIs that may receive minor changes in
future releases. Refer to the versioning documentation
for more details.
A HarnessProfileConfig contains the file-friendly subset of harness
settings: plain strings, bools, lists, and nested dicts that can be loaded
from YAML or JSON. For in-code/runtime-only adjustments such as
extra_middleware or class-form excluded_middleware, use
HarnessProfile instead.
excluded_middleware in config files currently only accepts plain
middleware-name strings matched against AgentMiddleware.name. A
future revision may add explicit class-path (module:Class) entries
for excluding middleware whose class isn't part of the public import
surface; until then, exclude such middleware via its .name (using
serialized_name for stable public aliases) or stay on the runtime
HarnessProfile and pass the class directly.
Config objects may be passed directly to register_harness_profile; the
helper converts them to runtime HarnessProfile objects automatically.
Declarative configuration for constructing a chat model.
deepagents.profiles exposes beta APIs that may receive minor changes in
future releases. Refer to the versioning documentation
for more details.
A ProviderProfile describes provider- or model-specific kwargs,
pre-initialization side effects, and runtime-derived kwargs that should be
applied when resolve_model turns a string spec (e.g. "openai:gpt-5.4")
into a BaseChatModel. Profiles are registered via
register_provider_profile under a provider key ("openai") or a full
provider:model key ("openai:gpt-5.4").
Profiles handle model-construction concerns only — things that shape how
init_chat_model assembles the client instance. Typical examples:
constructor kwargs like use_responses_api, temperature, max_tokens,
or base_url; provider-specific headers such as OpenRouter app
attribution; pre-construction checks like minimum-version enforcement;
and env-var-aware defaults.
Runtime and harness behavior — system-prompt assembly, tool description
overrides, excluded tools, extra middleware, general-purpose subagent
configuration — belongs in HarnessProfile, the separate harness
profile system consumed by create_deep_agent, not here.
Primary graph assembly module for Deep Agents.
Provides create_deep_agent, the main entry
point for constructing a fully configured deep agent with planning, filesystem,
subagent, and summarization middleware.
Middleware for the Deep Agents agent.
The LLM receives tools through two paths:

- Tools contributed by middleware (for example, FilesystemMiddleware adds the file tools).
- The tools parameter to create_deep_agent(). The CLI uses this path for lightweight, consumer-specific tools.

Both are merged by create_deep_agent() into the final tool set the LLM sees.
Middleware subclasses AgentMiddleware, overriding its wrap_model_call() hook, which intercepts every LLM request before it is sent. This lets middleware shape each request. For example:

- FilesystemMiddleware removes the execute tool at call time when the resolved backend doesn't support it.
- MemoryMiddleware and SkillsMiddleware inject relevant instructions into the system message on every call so the LLM knows how to use the tools they provide.
- SummarizationMiddleware counts tokens, truncates old tool arguments, and replaces history with summaries when the context window fills up.

A plain tool function in a tools=[] list cannot do any of this -- it is only invoked by the LLM, not before the LLM call.
Use middleware when the tool needs to inspect or modify the model request -- its prompt, message history, or available tool set -- before the LLM is called. Use a plain tool when it only needs to run after the LLM explicitly invokes it.
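The wrap_model_call() pattern can be sketched in a few lines. The class and hook names follow the docs, but the request shape (a plain dict with a `system` key) and the handler-chaining signature are assumptions for illustration.

```python
from typing import Callable


class AgentMiddleware:
    """Base: pass the request through unchanged."""

    def wrap_model_call(self, request: dict, handler: Callable) -> object:
        return handler(request)


class SuffixMiddleware(AgentMiddleware):
    """Injects an instruction into the system prompt on every call,
    in the spirit of MemoryMiddleware / SkillsMiddleware above."""

    def __init__(self, suffix: str):
        self.suffix = suffix

    def wrap_model_call(self, request: dict, handler: Callable) -> object:
        # Modify the request before it reaches the model, then delegate.
        request = {**request, "system": request.get("system", "") + self.suffix}
        return handler(request)
```

The key point is the interception: the middleware sees and can rewrite the request before `handler` forwards it to the model, which a plain tool function never can.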
Public beta APIs for model and harness profiles.
deepagents.profiles exposes beta APIs that may receive minor changes in
future releases. Refer to the versioning documentation
for more details.
Exposes the public ProviderProfile, HarnessProfile, and
HarnessProfileConfig APIs for customizing how resolve_model constructs chat
models and how create_deep_agent shapes agent runtime behavior.
Registration helpers are additive: re-registering under an existing key merges on top of the prior registration.
Memory backends for pluggable file storage.
Register a harness profile for a provider or specific model.
deepagents.profiles exposes beta APIs that may receive minor changes in
future releases. Refer to the versioning documentation
for more details.
Accepts either a runtime HarnessProfile or a declarative
HarnessProfileConfig. Config objects are converted to runtime profiles
at registration time so YAML/JSON-backed callers do not need a separate
manual conversion step.
Registrations are additive: if a profile is already registered under
key (including a built-in profile loaded during lazy bootstrap), the new
profile is merged on top rather than replacing it. The incoming profile's
fields win on conflicts; unspecified fields inherit from the existing
profile. Excluded-tool sets union, middleware sequences merge by type, and
general_purpose_subagent settings merge field-wise.
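The merge rules above can be sketched with dicts standing in for profile objects. The field names mirror HarnessProfile, but the helper and its exact semantics are assumptions; the real merge operates on typed profile instances.

```python
def merge_profiles(existing: dict, incoming: dict) -> dict:
    """Merge an incoming profile on top of an existing registration,
    following the documented rules."""
    merged = dict(existing)
    for key, value in incoming.items():
        if value is None:
            continue  # unspecified fields inherit from the existing profile
        if key == "excluded_tools":
            merged[key] = set(existing.get(key, set())) | set(value)  # union
        elif key == "general_purpose_subagent":
            merged[key] = {**existing.get(key, {}), **value}  # field-wise merge
        else:
            merged[key] = value  # incoming field wins on conflicts
    return merged
```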
To extend an existing registration, call register_harness_profile again
under the same key:
from deepagents import HarnessProfile, register_harness_profile
# Layer a system-prompt suffix on top of the previous registration.
register_harness_profile(
"openai:gpt-5.4",
HarnessProfile(system_prompt_suffix="Respond in under 100 words."),
)

Register a ProviderProfile for a provider or specific model.
deepagents.profiles exposes beta APIs that may receive minor changes in
future releases. Refer to the versioning documentation
for more details.
Registrations are additive: if a profile is already registered under
key (including a built-in profile loaded during lazy bootstrap), the new
profile is merged on top rather than replacing it. The incoming profile's
fields win on conflicts; unspecified fields inherit from the existing
profile.
pre_init callables chain (existing runs first), and init_kwargs_factory
callables chain — both factories are invoked at every resolution (base
first, then override) and their outputs merge with the override's values
winning on shared keys.
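The init_kwargs_factory chaining described above can be sketched as function composition. The helper name and the factory signature (taking a model-spec string, returning a kwargs dict) are assumptions for illustration.

```python
from typing import Callable


def chain_kwargs_factories(
    base_factory: Callable[[str], dict],
    override_factory: Callable[[str], dict],
) -> Callable[[str], dict]:
    """Both factories run at every resolution, base first; the override's
    values win on shared keys."""

    def combined(model_spec: str) -> dict:
        kwargs = dict(base_factory(model_spec))
        kwargs.update(override_factory(model_spec))  # override wins
        return kwargs

    return combined
```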
To layer additional kwargs onto a built-in profile, register under the same provider key. To override a built-in default (e.g. disable the OpenAI Responses API), set the conflicting key explicitly:
from deepagents import ProviderProfile, register_provider_profile
# Adds temperature alongside the built-in `use_responses_api=True`.
register_provider_profile("openai", ProviderProfile(init_kwargs={"temperature": 0}))
# Explicitly disables the Responses API for OpenAI. (This may break real
# usage; the example is purely illustrative.)
register_provider_profile(
"openai",
ProviderProfile(init_kwargs={"use_responses_api": False}),
)