# create_structured_chat_agent

> **Function** in `langchain_classic`

📖 [View in docs](https://reference.langchain.com/python/langchain-classic/agents/structured_chat/base/create_structured_chat_agent)

Create an agent aimed at supporting tools with multiple inputs.

## Signature

```python
create_structured_chat_agent(
    llm: BaseLanguageModel,
    tools: Sequence[BaseTool],
    prompt: ChatPromptTemplate,
    tools_renderer: ToolsRenderer = render_text_description_and_args,
    *,
    stop_sequence: bool | list[str] = True,
) -> Runnable
```

## Description

Prompt:

The prompt must have input keys:

- `tools`: contains descriptions and arguments for each tool.
- `tool_names`: contains all tool names.
- `agent_scratchpad`: contains previous agent actions and tool outputs as a string.

Here's an example:

````python
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

system = '''Respond to the human as helpfully and accurately as possible. You have access to the following tools:

{tools}

Use a json blob to specify a tool by providing an action key (tool name) and an action_input key (tool input).

Valid "action" values: "Final Answer" or {tool_names}

Provide only ONE action per $JSON_BLOB, as shown:

```
{{
    "action": $TOOL_NAME,
    "action_input": $INPUT
}}
```

Follow this format:

Question: input question to answer
Thought: consider previous and subsequent steps
Action:
```
$JSON_BLOB
```
Observation: action result
... (repeat Thought/Action/Observation N times)
Thought: I know what to respond
Action:
```
{{
    "action": "Final Answer",
    "action_input": "Final response to human"
}}
```

Begin! Reminder to ALWAYS respond with a valid json blob of a single action. Use tools if necessary. Respond directly if appropriate. Format is Action:```$JSON_BLOB```then Observation'''

human = '''{input}

{agent_scratchpad}

(reminder to respond in a JSON blob no matter what)'''

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system),
        MessagesPlaceholder("chat_history", optional=True),
        ("human", human),
    ]
)
````
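The prompt above instructs the model to emit each action as a fenced JSON blob. As an illustrative sketch of how such a response can be parsed (this is a hypothetical helper, not LangChain's actual `StructuredChatOutputParser`), extracting the action could look like:

```python
import json
import re


def parse_action(text: str) -> tuple[str, object]:
    """Pull the last fenced JSON blob out of a model response.

    Hypothetical helper for illustration only; LangChain ships its own
    structured-chat output parser. Returns (action, action_input).
    """
    # Match content between ``` fences, skipping an optional info
    # string such as ```json on the opening fence.
    blobs = re.findall(r"```(?:\w*\n)?(.*?)```", text, re.DOTALL)
    if not blobs:
        raise ValueError("no JSON action blob found in model output")
    blob = json.loads(blobs[-1])
    return blob["action"], blob["action_input"]
```

For example, a final-answer response such as `'Action:\n```\n{"action": "Final Answer", "action_input": "hi"}\n```'` parses to `("Final Answer", "hi")`.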

## Parameters

| Name | Type | Required | Description |
|------|------|----------|-------------|
| `llm` | `BaseLanguageModel` | Yes | LLM to use as the agent. |
| `tools` | `Sequence[BaseTool]` | Yes | Tools this agent has access to. |
| `prompt` | `ChatPromptTemplate` | Yes | The prompt to use. See Prompt section below for more. |
| `stop_sequence` | `bool \| list[str]` | No | If `True`, adds a stop token of `"Observation:"` to avoid hallucinated observations. If `False`, does not add a stop token. If a list of str, uses the provided list as the stop tokens. You may want to set this to `False` if the LLM you are using does not support stop sequences. (default: `True`) |
| `tools_renderer` | `ToolsRenderer` | No | This controls how the tools are converted into a string and then passed into the LLM. (default: `render_text_description_and_args`) |

## Returns

`Runnable`

A Runnable sequence representing an agent. It takes as input all the same input variables as the prompt passed in does, and returns as output either an `AgentAction` or an `AgentFinish`.

---

[View source on GitHub](https://github.com/langchain-ai/langchain/blob/9f232caa7a8fe1ca042a401942d5d90d54ceb1a6/libs/langchain/langchain_classic/agents/structured_chat/base.py#L166)