LangChain Reference
    Function · Since v1.0

    create_tool_calling_agent

    Create an agent that uses tools.

    create_tool_calling_agent(
      llm: BaseLanguageModel,
      tools: Sequence[BaseTool],
      prompt: ChatPromptTemplate,
      *,
      message_formatter: MessageFormatter = format_to_tool_messages
    ) -> Runnable

    Example:

    from langchain_classic.agents import (
        AgentExecutor,
        create_tool_calling_agent,
        tool,
    )
    from langchain_anthropic import ChatAnthropic
    from langchain_core.prompts import ChatPromptTemplate
    
    prompt = ChatPromptTemplate.from_messages(
        [
            ("system", "You are a helpful assistant"),
            ("placeholder", "{chat_history}"),
            ("human", "{input}"),
            ("placeholder", "{agent_scratchpad}"),
        ]
    )
    model = ChatAnthropic(model="claude-opus-4-1-20250805")
    
    @tool
    def magic_function(input: int) -> int:
        """Applies a magic function to an input."""
        return input + 2
    
    tools = [magic_function]
    
    agent = create_tool_calling_agent(model, tools, prompt)
    agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
    
    agent_executor.invoke({"input": "what is the value of magic_function(3)?"})
    
    # Using with chat history
    from langchain_core.messages import AIMessage, HumanMessage
    agent_executor.invoke(
        {
            "input": "what's my name?",
            "chat_history": [
                HumanMessage(content="hi! my name is bob"),
                AIMessage(content="Hello Bob! How can I assist you today?"),
            ],
        }
    )

    Prompt:

    The agent prompt must have an agent_scratchpad key that is a MessagesPlaceholder. Intermediate agent actions and tool output messages will be passed in here.

    Troubleshooting:

    • If you encounter invalid_tool_calls errors, ensure that your tool functions return properly formatted responses. Tool outputs should be serializable to JSON. For custom objects, implement proper __str__ or to_dict methods.
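A minimal sketch of that pattern, using a hypothetical SearchResult return type (not part of the library) whose string form round-trips through JSON:

```python
import json


class SearchResult:
    """Hypothetical custom return type for a tool."""

    def __init__(self, title: str, score: float):
        self.title = title
        self.score = score

    def to_dict(self) -> dict:
        # JSON-serializable representation of the tool output.
        return {"title": self.title, "score": self.score}

    def __str__(self) -> str:
        # String form is itself valid JSON, so the agent can parse it back.
        return json.dumps(self.to_dict())


result = SearchResult("LangChain docs", 0.92)
json.loads(str(result))  # round-trips cleanly
```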

    Used in Docs

    • Azure Container Apps dynamic sessions integration
    • Bing Search integration
    • cloro
    • Databricks Unity Catalog (UC) integration
    • FinancialDatasets toolkit integration

    Parameters

    • llm* (BaseLanguageModel): LLM to use as the agent.

    • tools* (Sequence[BaseTool]): Tools this agent has access to.

    • prompt* (ChatPromptTemplate): The prompt to use. See the Prompt section above for more on the expected input variables.

    • message_formatter (MessageFormatter, default: format_to_tool_messages): Formatter function to convert (AgentAction, tool output) tuples into FunctionMessages.
