    langchain_classic.evaluation.agents.trajectory_eval_chain

    Module · Since v1.0

    trajectory_eval_chain

    A chain for evaluating ReAct-style agents.

    This chain is used to evaluate ReAct-style agents by reasoning about the sequence of actions taken and their outcomes. It uses a language model chain (LLMChain) to generate the reasoning and scores.

    Attributes

    attribute EVAL_CHAT_PROMPT
        Chat prompt used to grade a trajectory when the agent's tool descriptions are provided to the evaluator.

    attribute TOOL_FREE_EVAL_CHAT_PROMPT
        Chat prompt used to grade a trajectory when no tool descriptions are available.
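
    A minimal sketch of inspecting the two prompts, assuming both names are importable from this module (the reference lists them as module attributes):

    from langchain_classic.evaluation.agents.trajectory_eval_chain import (
        EVAL_CHAT_PROMPT,
        TOOL_FREE_EVAL_CHAT_PROMPT,
    )

    # Compare the variables each template expects; the tool-aware prompt
    # additionally consumes the agent's tool descriptions.
    print(EVAL_CHAT_PROMPT.input_variables)
    print(TOOL_FREE_EVAL_CHAT_PROMPT.input_variables)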

    Classes

    class AgentTrajectoryEvaluator

    Interface for evaluating agent trajectories.
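
    The interface requires a single hook. A minimal sketch of a custom evaluator, assuming the `_evaluate_agent_trajectory` signature from the classic evaluation schema; the step-count heuristic below is purely illustrative, not part of the library:

    from typing import Any, Optional, Sequence

    from langchain_core.agents import AgentAction

    from langchain_classic.evaluation import AgentTrajectoryEvaluator


    class StepCountEvaluator(AgentTrajectoryEvaluator):
        """Illustrative evaluator: fewer tool calls means a higher score."""

        def _evaluate_agent_trajectory(
            self,
            *,
            prediction: str,
            agent_trajectory: Sequence[tuple[AgentAction, str]],
            input: str,
            reference: Optional[str] = None,
            **kwargs: Any,
        ) -> dict:
            # Each trajectory entry is an (AgentAction, observation) pair.
            return {"score": 1.0 / (1 + len(agent_trajectory))}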

    class LLMEvalChain

    A base class for evaluators that use an LLM.

    class TrajectoryEval

    A named tuple containing the score and reasoning for a trajectory.

    class TrajectoryOutputParser

    Parses the evaluation LLM's text output into a TrajectoryEval (score and reasoning).
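
    A minimal sketch of how the parser and the named tuple fit together, assuming the evaluator's text ends with a "Score: <1-5>" line and raw scores are normalized to the 0-1 range:

    from langchain_classic.evaluation.agents.trajectory_eval_chain import (
        TrajectoryOutputParser,
    )

    parser = TrajectoryOutputParser()
    verdict = parser.parse("The agent picked a sensible tool.\nScore: 5")
    print(verdict.score)      # 1.0 (raw 5 on the 1-5 scale)
    print(verdict.reasoning)  # the critique text before the score line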

    class TrajectoryEvalChain

    A chain for evaluating ReAct-style agents.

    This chain evaluates ReAct-style agents by reasoning about the sequence of actions taken and their outcomes. It is based on the paper "ReAct: Synergizing Reasoning and Acting in Language Models" (https://arxiv.org/abs/2210.03629).

    Example:

    from langchain_classic.agents import AgentType, initialize_agent
    from langchain_classic.evaluation import TrajectoryEvalChain
    from langchain_classic.tools import tool
    from langchain_openai import ChatOpenAI

    @tool
    def geography_answers(country: str, question: str) -> str:
        """Very helpful answers to geography questions."""
        return f"{country}? IDK - We may never know {question}."

    model = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)

    # return_intermediate_steps=True exposes the (action, observation) pairs
    # that the evaluator grades.
    agent = initialize_agent(
        tools=[geography_answers],
        llm=model,
        agent=AgentType.OPENAI_FUNCTIONS,
        return_intermediate_steps=True,
    )

    question = "How many dwell in the largest minor region in Argentina?"
    response = agent(question)

    eval_chain = TrajectoryEvalChain.from_llm(
        llm=model, agent_tools=[geography_answers], return_reasoning=True
    )

    result = eval_chain.evaluate_agent_trajectory(
        input=question,
        agent_trajectory=response["intermediate_steps"],
        prediction=response["output"],
        reference="Paris",
    )
    print(result["score"])  # noqa: T201
    # 0
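
    The same evaluator can also be built through the generic loader. A short sketch reusing the objects from the example above, assuming langchain-classic keeps the classic load_evaluator/EvaluatorType API:

    from langchain_classic.evaluation import EvaluatorType, load_evaluator

    # EvaluatorType.AGENT_TRAJECTORY resolves to TrajectoryEvalChain.
    evaluator = load_evaluator(
        EvaluatorType.AGENT_TRAJECTORY,
        llm=model,
        agent_tools=[geography_answers],
    )
    result = evaluator.evaluate_agent_trajectory(
        input=question,
        agent_trajectory=response["intermediate_steps"],
        prediction=response["output"],
    )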
    
    deprecated class LLMChain

    Chain to run queries against LLMs.

    This class is deprecated. See below for an example implementation using LangChain runnables:

    from langchain_core.output_parsers import StrOutputParser
    from langchain_core.prompts import PromptTemplate
    from langchain_openai import OpenAI
    
    prompt_template = "Tell me a {adjective} joke"
    prompt = PromptTemplate(input_variables=["adjective"], template=prompt_template)
    model = OpenAI()
    chain = prompt | model | StrOutputParser()
    
    chain.invoke("your adjective here")  # equivalent to chain.invoke({"adjective": "your adjective here"})
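
    Chains composed this way also support batching and streaming via the standard runnable methods; a brief sketch with illustrative inputs:

    # Run several inputs in parallel.
    chain.batch([{"adjective": "corny"}, {"adjective": "dry"}])

    # Stream the completion token by token.
    for chunk in chain.stream({"adjective": "silly"}):
        print(chunk, end="", flush=True)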