Create a multi-agent supervisor.
```python
create_supervisor(
    agents: list[Pregel],
    *,
    model: LanguageModelLike,
    tools: list[BaseTool | Callable] | ToolNode | None = None,
    prompt: Prompt | None = None,
    response_format: Optional[Union[StructuredResponseSchema, tuple[str, StructuredResponseSchema]]] = None,
    pre_model_hook: Optional[RunnableLike] = None,
    post_model_hook: Optional[RunnableLike] = None,
    parallel_tool_calls: bool = False,
    state_schema: StateSchemaType | None = None,
    context_schema: Type[Any] | None = None,
    output_mode: OutputMode = 'last_message',
    add_handoff_messages: bool = True,
    handoff_tool_prefix: Optional[str] = None,
    add_handoff_back_messages: Optional[bool] = None,
    supervisor_name: str = 'supervisor',
    include_agent_name: AgentNameMode | None = None,
    **deprecated_kwargs: Unpack[DeprecatedKwargs],
) -> StateGraph
```

Example

```python
from langchain_openai import ChatOpenAI
from langgraph_supervisor import create_supervisor
from langgraph.prebuilt import create_react_agent

# Tools for the specialized agents
def add(a: float, b: float) -> float:
    '''Add two numbers.'''
    return a + b

def web_search(query: str) -> str:
    '''Search the web for information.'''
    return 'Here are the headcounts for each of the FAANG companies in 2024...'

# Create specialized agents
math_agent = create_react_agent(
    model="openai:gpt-4o",
    tools=[add],
    name="math_expert",
)

research_agent = create_react_agent(
    model="openai:gpt-4o",
    tools=[web_search],
    name="research_expert",
)

# Create supervisor workflow
workflow = create_supervisor(
    [research_agent, math_agent],
    model=ChatOpenAI(model="gpt-4o"),
)

# Compile and run
app = workflow.compile()
result = app.invoke({
    "messages": [
        {
            "role": "user",
            "content": "what's the combined headcount of the FAANG companies in 2024?"
        }
    ]
})
```

| Name | Type | Description |
|---|---|---|
agents* | list[Pregel] | List of agents to manage. An agent can be a LangGraph `CompiledStateGraph`, a functional API workflow, or any other `Pregel` object. |
model* | LanguageModelLike | Language model to use for the supervisor |
tools | list[BaseTool \| Callable] \| ToolNode \| None | Default: `None`. Tools to use for the supervisor. |
prompt | Prompt \| None | Default: `None`. Optional prompt to use for the supervisor. Can be one of: a `str` (converted to a system message), a `SystemMessage`, a callable, or a `Runnable` that takes the graph state and returns the input to the language model. |
response_format | Optional[Union[StructuredResponseSchema, tuple[str, StructuredResponseSchema]]] | Default: `None`. An optional schema for the final supervisor output. If provided, output will be formatted to match the given schema and returned in the `structured_response` state key; if not provided, `structured_response` will not be present in the output state. Can be passed in as an OpenAI function/tool schema, a JSON Schema, a TypedDict class, a Pydantic class, or a tuple `(prompt, schema)` where the schema is one of the above. Important: `response_format` requires the model to support `.with_structured_output`. Note: the structured response is generated in a separate LLM call after the agent loop finishes, using the message history. |
pre_model_hook | Optional[RunnableLike] | Default: `None`. An optional node to add before the LLM node in the supervisor agent (i.e., the node that calls the LLM). Useful for managing long message histories (e.g., message trimming, summarization, etc.). The pre-model hook must be a callable or a runnable that takes in the current graph state and returns a state update in the form of `{"llm_input_messages": [...]}` or `{"messages": [RemoveMessage(REMOVE_ALL_MESSAGES), ...]}`. Important: at least one of `llm_input_messages` or `messages` must be provided. Warning: if you are returning `messages`, you should overwrite the `messages` key by returning `[RemoveMessage(REMOVE_ALL_MESSAGES), *new_messages]`. |
post_model_hook | Optional[RunnableLike] | Default: NoneAn optional node to add after the LLM node in the supervisor agent (i.e., the node that calls the LLM). Useful for implementing human-in-the-loop, guardrails, validation, or other post-processing. Post-model hook must be a callable or a runnable that takes in current graph state and returns a state update. |
parallel_tool_calls | bool | Default: `False`. Whether to allow the supervisor LLM to call tools in parallel. Use this to control whether the supervisor can hand off to multiple agents at once: if `True`, multiple handoffs can be issued in a single turn; if `False` (default), the supervisor hands off to one agent at a time. Important: this is currently supported only by OpenAI and Anthropic models. To control parallel tool calling for other providers, add explicit instructions for tool use to the system prompt. |
state_schema | StateSchemaType \| None | Default: `None`. State schema to use for the supervisor graph. |
context_schema | Type[Any] \| None | Default: `None`. Specifies the schema for the context object that will be passed to the workflow. |
output_mode | OutputMode | Default: `'last_message'`. Mode for adding managed agents' outputs to the message history in the multi-agent workflow. Can be one of: `'full_history'` (add the entire agent message history) or `'last_message'` (add only the last message). |
add_handoff_messages | bool | Default: `True`. Whether to add a pair of (AIMessage, ToolMessage) to the message history when a handoff occurs. |
handoff_tool_prefix | Optional[str] | Default: `None`. Optional prefix for the handoff tools (e.g., `'delegate_to_'` or `'assign_to_'`). If provided, the handoff tools will be named `<handoff_tool_prefix><agent_name>`; if not provided, they will be named `transfer_to_<agent_name>`. |
add_handoff_back_messages | Optional[bool] | Default: `None`. Whether to add a pair of (AIMessage, ToolMessage) to the message history when returning control to the supervisor, to indicate that a handoff has occurred. If `None`, defaults to the value of `add_handoff_messages`. |
supervisor_name | str | Default: `'supervisor'`. Name of the supervisor node. |
include_agent_name | AgentNameMode \| None | Default: `None`. Use to specify how to expose the agent name to the underlying supervisor LLM. Can be `None` (rely on the message `name` attribute, which only some model providers support) or `'inline'` (add the agent name directly into the content field of the AI message using XML-style tags, e.g. `<name>agent_name</name><content>message content</content>`). |
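
The difference between the two `output_mode` values can be illustrated with a small pure-Python sketch. Note that `add_agent_output` is a hypothetical helper written for illustration, not part of the library:

```python
def add_agent_output(history, agent_messages, output_mode="last_message"):
    # Hypothetical helper mirroring the output_mode semantics described
    # in the table above -- not the library's actual implementation.
    if output_mode == "full_history":
        return history + agent_messages       # keep every intermediate message
    if output_mode == "last_message":
        return history + agent_messages[-1:]  # keep only the agent's final message
    raise ValueError(f"unknown output_mode: {output_mode!r}")

history = ["user: combined FAANG headcount?"]
agent_msgs = ["ai: calling web_search", "tool: search results", "ai: final answer"]

print(add_agent_output(history, agent_msgs))                  # 2 messages total
print(add_agent_output(history, agent_msgs, "full_history"))  # 4 messages total
```

`'last_message'` keeps the supervisor's context small at the cost of discarding the agent's intermediate tool calls; `'full_history'` preserves everything for later turns.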
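
As a minimal sketch of the `pre_model_hook` contract (assuming the `llm_input_messages` update key behaves as described above; the keep-last-5 trimming rule is an arbitrary choice for illustration):

```python
def trimming_pre_model_hook(state):
    # Keep only the last 5 messages for the next LLM call. Returning
    # "llm_input_messages" (rather than "messages") leaves the stored
    # message history untouched, so no RemoveMessage handling is needed.
    return {"llm_input_messages": state["messages"][-5:]}

state = {"messages": [f"msg-{i}" for i in range(8)]}
print(trimming_pre_model_hook(state)["llm_input_messages"])
```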
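
The handoff-tool naming rule (default `transfer_to_<agent_name>`, or `<prefix><agent_name>` when `handoff_tool_prefix` is set) can be sketched as follows. `handoff_tool_name` is a hypothetical helper, and the whitespace normalization is an assumption made because tool names generally cannot contain spaces:

```python
import re

def handoff_tool_name(agent_name, prefix=None):
    # Hypothetical helper mirroring the naming rule described above.
    base = (prefix or "transfer_to_") + agent_name
    return re.sub(r"\s+", "_", base)  # assumption: normalize whitespace

print(handoff_tool_name("math_expert"))                      # transfer_to_math_expert
print(handoff_tool_name("math_expert", prefix="assign_to_")) # assign_to_math_expert
```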