| Name | Type |
|---|---|
| `base_system_prompt` | `str \| None` |
| `system_prompt_suffix` | `str \| None` |
| `tool_description_overrides` | `Mapping[str, str]` |
| `excluded_tools` | `frozenset[str]` |
| `excluded_middleware` | `frozenset[type[AgentMiddleware] \| str]` |
| `extra_middleware` | `Sequence[AgentMiddleware] \| Callable[[], Sequence[AgentMiddleware]]` |
| `general_purpose_subagent` | `GeneralPurposeSubagentProfile \| None` |
`base_system_prompt`: CUSTOM slot in the prompt assembly order. When set, it
completely replaces BASE_AGENT_PROMPT as the base prompt; None (the default)
means BASE_AGENT_PROMPT is used unchanged.
If both base_system_prompt and system_prompt_suffix are set, the suffix is
appended to this custom base. A caller-supplied system_prompt= is still placed
before this base; see create_deep_agent's system_prompt parameter or the
Prompt assembly section for the full assembly order.
Most profiles only set system_prompt_suffix, to layer model-tuning guidance on
top of the SDK base.
`system_prompt_suffix`: SUFFIX slot in the prompt assembly order; text
appended to the assembled base system prompt. It always sits last (after BASE
or CUSTOM), so model-tuning guidance lands closest to the conversation
history. None (the default) means no suffix.
The suffix is applied uniformly to every assembled stack that consults this
profile: the main agent, declarative subagents whose model resolves to this
profile, and the auto-added general-purpose subagent. Each stack receives the
suffix on top of its own base prompt (BASE_AGENT_PROMPT, the subagent's
authored prompt, and the general-purpose base, respectively).
See create_deep_agent's system_prompt parameter or the Prompt assembly section
for how SUFFIX composes with caller-supplied prompts and base_system_prompt.
Runtime configuration for deep agent behavior.
deepagents.profiles exposes beta APIs that may receive minor changes in
future releases. Refer to the versioning documentation
for more details.
A HarnessProfile describes the prompt-assembly, tool-visibility, middleware,
and default-subagent adjustments that create_deep_agent applies once a chat
model has been constructed. Profiles are registered via
register_harness_profile under either a provider key ("openai") or a full
provider:model key ("openai:gpt-5.4").
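One plausible reading of the two key forms is a most-specific-first lookup. The helper below is a hypothetical sketch for illustration; the fallback from the full key to the bare provider key is an assumption, not confirmed library behavior:

```python
def resolve_harness_profile(registry: dict, provider: str, model: str):
    """Hypothetical lookup sketch: prefer the full "provider:model" key,
    then fall back to the bare provider key. The fallback order is an
    assumption, not documented library behavior.
    """
    full_key = f"{provider}:{model}"
    if full_key in registry:
        return registry[full_key]
    return registry.get(provider)
```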
This complements ProviderProfile, which controls the model-construction
phase (e.g. init_chat_model kwargs, pre-init side effects). Concerns
that shape how the model is built belong in ProviderProfile; concerns
that shape how the agent runs belong here.
For YAML/JSON-backed profiles, use HarnessProfileConfig, which contains
only the declarative subset and can be passed directly to
register_harness_profile.
The extra_middleware field expects
langchain.agents.middleware.types.AgentMiddleware instances or a
factory returning a sequence of them.
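A sequence-or-factory union like this is typically normalized with a small helper. The sketch below shows the generic pattern only; the function name is hypothetical and this is not the library's code:

```python
from typing import Callable, Sequence, TypeVar, Union

T = TypeVar("T")

def resolve_extra_middleware(
    extra: Union[Sequence[T], Callable[[], Sequence[T]]],
) -> list[T]:
    """Accept either a ready-made sequence or a zero-argument factory
    and return a plain list. Generic sketch of the pattern only.
    """
    # A factory lets each agent build receive fresh middleware instances
    # instead of sharing one mutable sequence across builds.
    return list(extra()) if callable(extra) else list(extra)
```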
Example:

Minimal: append a model-specific system-prompt suffix.

```python
from deepagents import HarnessProfile, register_harness_profile

register_harness_profile(
    "openai:gpt-5.4",
    HarnessProfile(system_prompt_suffix="Think step by step."),
)
```
Richer: combine prompt tuning, tool exclusion, and a tweak to the auto-added
general-purpose subagent.

```python
from deepagents import (
    GeneralPurposeSubagentProfile,
    HarnessProfile,
    register_harness_profile,
)

register_harness_profile(
    "openai:gpt-5.4",
    HarnessProfile(
        system_prompt_suffix="Respond in under 100 words.",
        excluded_tools=frozenset({"execute"}),
        general_purpose_subagent=GeneralPurposeSubagentProfile(enabled=False),
    ),
)
```