Create an agent that uses ReAct prompting.
Based on the paper "ReAct: Synergizing Reasoning and Acting in Language Models" (https://arxiv.org/abs/2210.03629).
This implementation follows the foundational ReAct paper but is a legacy interface that is not well-suited for production applications.
For a more robust and feature-rich implementation, we recommend using the
create_agent function from the langchain library.
See the reference doc for more information.
```python
create_react_agent(
    llm: BaseLanguageModel,
    tools: Sequence[BaseTool],
    prompt: BasePromptTemplate,
    output_parser: AgentOutputParser | None = None,
    tools_renderer: ToolsRenderer = render_text_description,
    *,
    stop_sequence: bool | list[str] = True,
) -> Runnable
```

Prompt

The prompt must have input keys:
* tools: contains descriptions and arguments for each tool.
* tool_names: contains all tool names.
* agent_scratchpad: contains previous agent actions and tool outputs as a string.
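For illustration, here is a plain-Python sketch of how prior (LLM output, tool observation) pairs are typically flattened into the `agent_scratchpad` string. The helper name is hypothetical and this is not langchain's internal code; it only shows the shape of the string the prompt receives:

```python
# Hypothetical helper (not langchain's implementation): flattens earlier
# agent steps into the agent_scratchpad string expected by the prompt.
def format_scratchpad(intermediate_steps):
    scratchpad = ""
    for llm_output, observation in intermediate_steps:
        # Replay the model's earlier reasoning, then append the tool result
        # and cue the next thought.
        scratchpad += llm_output
        scratchpad += f"\nObservation: {observation}\nThought: "
    return scratchpad

steps = [
    ("I should check the weather.\nAction: search\nAction Input: weather in Paris",
     "Sunny and 21 degrees."),
]
print(format_scratchpad(steps))
```

The resulting string slots into the `{agent_scratchpad}` placeholder at the end of the template, so the model sees its own earlier steps before producing the next Thought.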
Here's an example:
```python
from langchain_core.prompts import PromptTemplate

template = '''Answer the following questions as best you can. You have access to the following tools:

{tools}

Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question

Begin!

Question: {input}
Thought:{agent_scratchpad}'''

prompt = PromptTemplate.from_template(template)
```

| Name | Type | Description |
|---|---|---|
| llm* | BaseLanguageModel | LLM to use as the agent. |
| tools* | Sequence[BaseTool] | Tools this agent has access to. |
| prompt* | BasePromptTemplate | The prompt to use. See the Prompt section above for more. |
| output_parser | AgentOutputParser \| None | Default: `None`. AgentOutputParser to parse the LLM output. |
| tools_renderer | ToolsRenderer | Default: `render_text_description`. Controls how the tools are converted into a string and then passed into the LLM. |
| stop_sequence | bool \| list[str] | Default: `True`. You may set this to `False` if the LLM you are using does not support stop sequences. |
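To make the Thought/Action/Observation format concrete, here is a self-contained toy loop in the spirit of what an executor runs around the agent: parse the model's Action and Action Input, call the tool, append the Observation to the scratchpad, and stop at the Final Answer. The fake LLM, tool, and function names are stand-ins for illustration, not langchain's implementation; in practice an executor such as AgentExecutor drives this loop for you:

```python
import re

def run_react_loop(llm, tools, question, max_steps=5):
    """Minimal ReAct-style loop (illustrative, not langchain's code)."""
    scratchpad = ""
    for _ in range(max_steps):
        # A real agent renders the full prompt template here; our fake
        # model only needs the question and the scratchpad so far.
        output = llm(question, scratchpad)
        if "Final Answer:" in output:
            return output.split("Final Answer:", 1)[1].strip()
        match = re.search(r"Action: (.*)\nAction Input: (.*)", output)
        if match is None:
            raise ValueError(f"Could not parse model output: {output!r}")
        tool_name, tool_input = match.group(1).strip(), match.group(2).strip()
        observation = tools[tool_name](tool_input)
        scratchpad += f"{output}\nObservation: {observation}\nThought:"
    raise RuntimeError("Agent did not produce a final answer")

# Scripted fake LLM: issues one tool call, then the final answer.
def fake_llm(question, scratchpad):
    if "Observation:" not in scratchpad:
        return "I should look this up.\nAction: search\nAction Input: capital of France"
    return "I now know the final answer.\nFinal Answer: Paris"

tools = {"search": lambda q: "Paris is the capital of France."}
print(run_react_loop(fake_llm, tools, "What is the capital of France?"))  # prints: Paris
```

The `stop_sequence` parameter exists to support exactly this loop: stopping generation before the model hallucinates its own Observation lets the executor supply the real tool output instead.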