Create an agent aimed at supporting tools with multiple inputs.
```python
create_structured_chat_agent(
    llm: BaseLanguageModel,
    tools: Sequence[BaseTool],
    prompt: ChatPromptTemplate,
    tools_renderer: ToolsRenderer = render_text_description_and_args,
    *,
    stop_sequence: bool | list[str] = True
) -> Runnable
```
Prompt
The prompt must have input keys:
* tools: contains descriptions and arguments for each tool.
* tool_names: contains all tool names.
* agent_scratchpad: contains previous agent actions and tool outputs as a
string.
Here's an example:
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
system = '''Respond to the human as helpfully and accurately as possible. You have access to the following tools:
{tools}
Use a json blob to specify a tool by providing an action key (tool name) and an action_input key (tool input).
Valid "action" values: "Final Answer" or {tool_names}
Provide only ONE action per $JSON_BLOB, as shown:
```txt
{{
  "action": $TOOL_NAME,
  "action_input": $INPUT
}}
```
Follow this format:
Question: input question to answer
Thought: consider previous and subsequent steps
Action:
$JSON_BLOB
Observation: action result
... (repeat Thought/Action/Observation N times)
Thought: I know what to respond
Action:
{{
  "action": "Final Answer",
  "action_input": "Final response to human"
}}
Begin! Reminder to ALWAYS respond with a valid json blob of a single action. Use tools if necessary. Respond directly if appropriate. Format is Action:```$JSON_BLOB```then Observation'''
human = '''{input}
{agent_scratchpad}
(reminder to respond in a JSON blob no matter what)'''
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system),
        MessagesPlaceholder("chat_history", optional=True),
        ("human", human),
    ]
)
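Once the prompt is defined, the agent is created with create_structured_chat_agent and is typically wrapped in an AgentExecutor, which drives the Thought/Action/Observation loop. The following is a minimal sketch: the multiply tool and the ChatOpenAI model name are placeholder assumptions, and any chat model and tool set can be substituted.

```python
from langchain.agents import AgentExecutor, create_structured_chat_agent
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI  # assumes the langchain-openai package is installed


# Hypothetical multi-input tool, used only for illustration.
@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b


tools = [multiply]
llm = ChatOpenAI(model="gpt-4o-mini")  # placeholder model name

# `prompt` is the ChatPromptTemplate built above.
agent = create_structured_chat_agent(llm, tools, prompt)

# AgentExecutor repeatedly calls the agent and the chosen tools until the
# agent returns a "Final Answer" action.
agent_executor = AgentExecutor(agent=agent, tools=tools, handle_parsing_errors=True)

result = agent_executor.invoke({"input": "What is 6 times 7?"})
print(result["output"])
```

Instead of writing the prompt by hand, the equivalent prompt can also be pulled from the LangChain Hub as hwchase17/structured-chat-agent.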
| Name | Type | Description |
|---|---|---|
| llm* | BaseLanguageModel | LLM to use as the agent. |
| tools* | Sequence[BaseTool] | Tools this agent has access to. |
| prompt* | ChatPromptTemplate | The prompt to use. See the Prompt section above for the required input keys. |
| stop_sequence | bool \| list[str] | Default: True. If True, adds a stop token of "Observation:" to keep the model from hallucinating tool observations; if a list of str, the provided strings are used as the stop tokens. You may want to set this to False if the LLM you are using does not support stop sequences. |
| tools_renderer | ToolsRenderer | Default: render_text_description_and_args. Controls how the tools are converted into a string that is then passed into the LLM. |
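The tools_renderer hook can be replaced by any callable that takes the sequence of tools and returns the string substituted into {tools}. The sketch below is illustrative only: compact_tools_renderer is a hypothetical helper, and it relies on the name, description, and args attributes exposed by BaseTool (reusing llm, tools, and prompt from the example above).

```python
from langchain_core.tools import BaseTool


def compact_tools_renderer(tools: list[BaseTool]) -> str:
    """Render each tool on one line as: name(arg1, arg2) - description."""
    lines = []
    for t in tools:
        arg_names = ", ".join(t.args.keys())  # t.args lists the tool's input fields
        lines.append(f"{t.name}({arg_names}) - {t.description}")
    return "\n".join(lines)


agent = create_structured_chat_agent(
    llm, tools, prompt, tools_renderer=compact_tools_renderer
)
```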