    langchain_classic.agents.structured_chat.base.create_structured_chat_agent
    Function · Since v1.0

    create_structured_chat_agent

    Create an agent aimed at supporting tools with multiple inputs.

    create_structured_chat_agent(
      llm: BaseLanguageModel,
      tools: Sequence[BaseTool],
      prompt: ChatPromptTemplate,
      tools_renderer: ToolsRenderer = render_text_description_and_args,
      *,
      stop_sequence: bool | list[str] = True
    ) -> Runnable

    Prompt:

    The prompt must have these input keys:

    • tools: contains descriptions and arguments for each tool.
    • tool_names: contains all tool names.
    • agent_scratchpad: contains previous agent actions and tool outputs as a string.

    Here's an example:

    from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

    system = '''Respond to the human as helpfully and accurately as possible. You have access to the following tools:

    {tools}

    Use a json blob to specify a tool by providing an action key (tool name) and an action_input key (tool input).

    Valid "action" values: "Final Answer" or {tool_names}

    Provide only ONE action per $JSON_BLOB, as shown:

    ```
    {{
        "action": $TOOL_NAME,
        "action_input": $INPUT
    }}
    ```

    Follow this format:

    Question: input question to answer
    Thought: consider previous and subsequent steps
    Action:
    ```
    $JSON_BLOB
    ```
    Observation: action result
    ... (repeat Thought/Action/Observation N times)
    Thought: I know what to respond
    Action:
    ```
    {{
        "action": "Final Answer",
        "action_input": "Final response to human"
    }}
    ```

    Begin! Reminder to ALWAYS respond with a valid json blob of a single action. Use tools if necessary. Respond directly if appropriate. Format is Action:```$JSON_BLOB```then Observation'''

    human = '''{input}

    {agent_scratchpad}

    (reminder to respond in a JSON blob no matter what)'''

    prompt = ChatPromptTemplate.from_messages(
        [
            ("system", system),
            MessagesPlaceholder("chat_history", optional=True),
            ("human", human),
        ]
    )
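
    With a prompt like the one above, the returned agent runnable is typically wrapped in an AgentExecutor and invoked with an input dict. The following is a minimal usage sketch (not from the reference page): import paths assume the langchain-classic and langchain-openai packages, and the model and tools are placeholders.

    # Minimal usage sketch; on pre-1.0 installs the same names are importable from langchain.agents.
    from langchain_classic.agents import AgentExecutor, create_structured_chat_agent
    from langchain_openai import ChatOpenAI

    llm = ChatOpenAI(model="gpt-4o-mini")  # placeholder model choice
    tools = [...]  # your BaseTool instances (multi-input args schemas are supported)

    agent = create_structured_chat_agent(llm, tools, prompt)

    # AgentExecutor drives the Thought/Action/Observation loop until "Final Answer".
    agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
    agent_executor.invoke({"input": "What tools do you have access to?"})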
    

    Used in Docs

    • Azure AI services toolkit integration

    Parameters

    llm: BaseLanguageModel (required)

    LLM to use as the agent.

    tools: Sequence[BaseTool] (required)

    Tools this agent has access to.

    prompt: ChatPromptTemplate (required)

    The prompt to use. See the Prompt section above for more.

    stop_sequence: bool | list[str]
    Default: True

    If True, adds a stop token of "Observation:" to avoid hallucinations. If False, does not add a stop token. If a list of strings, uses the provided list as the stop tokens.

    You may need to set this to False if the LLM you are using does not support stop sequences.
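
    For illustration (not from the reference page), disabling or overriding the default stop token looks like this; llm, tools, and prompt are assumed to be defined as in the sketch above:

    # stop_sequence=False: rely on the model to stop on its own
    agent = create_structured_chat_agent(llm, tools, prompt, stop_sequence=False)

    # or pass explicit stop tokens
    agent = create_structured_chat_agent(llm, tools, prompt, stop_sequence=["\nObservation"])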

    tools_renderer: ToolsRenderer
    Default: render_text_description_and_args

    Controls how the tools are converted into a string that is then passed to the LLM.
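
    A tools renderer is a callable that takes the list of tools and returns the string used to fill the {tools} prompt variable. As an illustrative sketch (compact_renderer is hypothetical, not part of the library):

    from langchain_core.tools import BaseTool

    def compact_renderer(tools: list[BaseTool]) -> str:
        # Hypothetical renderer: one line per tool with its name, argument names, and description.
        return "\n".join(f"{t.name}{list(t.args)}: {t.description}" for t in tools)

    agent = create_structured_chat_agent(llm, tools, prompt, tools_renderer=compact_renderer)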
