    langchain_classic.agents.react.agent.create_react_agent
    Function · Since v1.0

    create_react_agent

    Create an agent that uses ReAct prompting.

    Based on the paper "ReAct: Synergizing Reasoning and Acting in Language Models" (https://arxiv.org/abs/2210.03629).

    Warning

    This implementation is based on the foundational ReAct paper but is older and not well-suited for production applications.

    For a more robust and feature-rich implementation, we recommend using the create_agent function from the langchain library.

    See the reference doc for more information.

    create_react_agent(
      llm: BaseLanguageModel,
      tools: Sequence[BaseTool],
      prompt: BasePromptTemplate,
      output_parser: AgentOutputParser | None = None,
      tools_renderer: ToolsRenderer = render_text_description,
      *,
      stop_sequence: bool | list[str] = True
    ) -> Runnable

    Prompt:

    The prompt must have input keys:

    • tools: contains descriptions and arguments for each tool.
    • tool_names: contains all tool names.
    • agent_scratchpad: contains previous agent actions and tool outputs as a string.

    Here's an example:

    from langchain_core.prompts import PromptTemplate
    
    template = '''Answer the following questions as best you can. You have access to the following tools:
    
    {tools}
    
    Use the following format:
    
    Question: the input question you must answer
    Thought: you should always think about what to do
    Action: the action to take, should be one of [{tool_names}]
    Action Input: the input to the action
    Observation: the result of the action
    ... (this Thought/Action/Action Input/Observation can repeat N times)
    Thought: I now know the final answer
    Final Answer: the final answer to the original input question
    
    Begin!
    
    Question: {input}
    Thought:{agent_scratchpad}'''
    
    prompt = PromptTemplate.from_template(template)
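
    Putting it together, here is a minimal end-to-end sketch (not part of the original reference): it assumes the langchain-classic and langchain-openai packages are installed with an OpenAI API key configured, get_word_length is a made-up example tool, and prompt is the PromptTemplate built above.

    from langchain_classic.agents import AgentExecutor, create_react_agent
    from langchain_core.tools import tool
    from langchain_openai import ChatOpenAI
    # Note: on pre-1.0 installs, AgentExecutor and create_react_agent live under
    # langchain.agents rather than langchain_classic.agents.

    # Hypothetical example tool for illustration.
    @tool
    def get_word_length(word: str) -> int:
        """Return the number of characters in a word."""
        return len(word)

    tools = [get_word_length]
    llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

    # `prompt` is the PromptTemplate built above.
    agent = create_react_agent(llm, tools, prompt)

    # The returned agent is a Runnable; AgentExecutor drives the
    # Thought/Action/Observation loop until a Final Answer is produced.
    agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
    agent_executor.invoke({"input": "How many letters are in the word 'LangChain'?"})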

    Used in Docs

    • Amazon Bedrock agentcore browser integration
    • Amazon Bedrock agentcore code interpreter integration
    • CAMB AI integration
    • IBM watsonx.ai SQL integration
    • LangChain v1 migration guide

    Parameters

    llm: BaseLanguageModel (required)

    LLM to use as the agent.

    tools: Sequence[BaseTool] (required)

    Tools this agent has access to.

    prompt: BasePromptTemplate (required)

    The prompt to use. See the Prompt section above for more.

    output_parser: AgentOutputParser | None (default: None)

    AgentOutputParser to parse the LLM output.

    tools_renderer: ToolsRenderer (default: render_text_description)

    Controls how the tools are converted into a string and then passed into the LLM.

    stop_sequence: bool | list[str] (default: True)

    If True, adds a stop token of "Observation:" to avoid hallucinated observations. If False, does not add a stop token. If a list of str, uses the provided list as the stop tokens.

    You may want to set this to False if the LLM you are using does not support stop sequences.
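
    For example, both of these options can be overridden in a single call. This is a sketch, not from the original reference; it reuses the llm, tools, and prompt objects from the example above, and render_text_description_and_args comes from langchain_core.

    from langchain_core.tools.render import render_text_description_and_args

    # Include argument schemas in the rendered tool descriptions and use a
    # custom stop list instead of the default "Observation:" token.
    agent = create_react_agent(
        llm,
        tools,
        prompt,
        tools_renderer=render_text_description_and_args,
        stop_sequence=["\nObservation"],  # or False for models without stop-sequence support
    )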
