    Python › langchain-classic › evaluation › agents › trajectory_eval_chain › TrajectoryEvalChain
    Class · Since v1.0

    TrajectoryEvalChain

    TrajectoryEvalChain()

    Bases: AgentTrajectoryEvaluator, LLMEvalChain

    Inherited from AgentTrajectoryEvaluator

    Attributes

    requires_input: bool
    —

    Whether this evaluator requires an input string.

    Methods

    evaluate_agent_trajectory
    —

    Evaluate a trajectory.

    aevaluate_agent_trajectory
    —

    Asynchronously evaluate a trajectory (see the sketch below).
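
    A minimal async sketch, not taken from this page: it assumes an eval_chain built with TrajectoryEvalChain.from_llm(...) and an agent response dict produced with return_intermediate_steps=True, as in the example further down.

    import asyncio
    from typing import Any

    async def grade_trajectory(eval_chain, question: str, response: dict[str, Any]) -> dict:
        # `eval_chain` is assumed to be a TrajectoryEvalChain instance.
        return await eval_chain.aevaluate_agent_trajectory(
            input=question,
            agent_trajectory=response["intermediate_steps"],
            prediction=response["output"],
        )

    # Hypothetical driver line; `question` and `response` come from an agent run.
    # result = asyncio.run(grade_trajectory(eval_chain, question, response))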

    Inherited from Chain

    Attributes

    memory: BaseMemory | None
    —

    Optional memory object.

    callbacks: Callbacks · verbose: bool · tags: list[str] | None · metadata: dict[str, Any] | None

    callback_manager: BaseCallbackManager | None
    —

    [DEPRECATED] Use callbacks instead.

    Methods

    get_input_schema · get_output_schema · invoke (see the sketch below) · ainvoke

    raise_callback_manager_deprecation
    —

    Raise deprecation warning if callback_manager is used.
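
    Because TrajectoryEvalChain is a Chain, it can also be driven through the generic invoke() interface. A hedged sketch, not from this page: the input key names below are an assumption that mirrors the input_keys documented further down, and eval_chain, question, and response are the objects from the example in the docstring.

    # Hedged sketch: the dict keys ("question", "agent_trajectory", "answer",
    # "reference") are assumed, not confirmed by this page.
    result = eval_chain.invoke(
        {
            "question": question,
            "agent_trajectory": eval_chain.get_agent_trajectory(
                response["intermediate_steps"]
            ),
            "answer": response["output"],
            "reference": "Paris",
        }
    )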

    Inherited from RunnableSerializable (langchain_core)

    Attributes

    name

    Methods

    to_json · configurable_fields · configurable_alternatives

    Inherited from Serializable (langchain_core)

    Attributes

    lc_secrets · lc_attributes

    Methods

    is_lc_serializable · get_lc_namespace · lc_id · to_json · to_json_not_implemented

    Inherited from Runnable (langchain_core)

    Attributes

    name · InputType · OutputType · input_schema · output_schema · config_specs

    Methods

    get_name · get_input_schema · get_input_jsonschema · get_output_schema · get_output_jsonschema (the remaining Runnable members are listed at the end of this page)

    Attributes

    attribute
    agent_tools: list[BaseTool] | None

    A list of tools available to the agent.

    attribute
    eval_chain: LLMChain

    The language model chain used for evaluation.

    attribute
    output_parser: TrajectoryOutputParser

    The output parser used to parse the output.

    attribute
    return_reasoning: bool

    Deprecated; reasoning is now always returned.

    attribute
    model_config

    attribute
    requires_reference: bool

    Whether this evaluator requires a reference label.

    attribute
    input_keys: list[str]

    Get the input keys for the chain.

    attribute
    output_keys: list[str]

    Get the output keys for the chain.

    Methods

    method
    get_agent_trajectory

    Get the agent trajectory as a formatted string.
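
    A small usage sketch, not from this page (assumption: the helper accepts the intermediate_steps list returned by an agent run and renders it as one printable string):

    # Hedged sketch: `eval_chain` and `response` are the objects from the
    # example in the docstring below.
    trajectory_text = eval_chain.get_agent_trajectory(response["intermediate_steps"])
    print(trajectory_text)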

    method
    from_llm

    Create a TrajectoryEvalChain object from a language model chain.

    method
    prep_inputs

    Validate and prep inputs.

    A chain for evaluating ReAct-style agents.

    This chain evaluates ReAct-style agents by reasoning about the sequence of actions they took and the outcomes of those actions. It is based on the paper "ReAct: Synergizing Reasoning and Acting in Language Models" (https://arxiv.org/abs/2210.03629).

    Example:

    from langchain_classic.agents import AgentType, initialize_agent
    from langchain_openai import ChatOpenAI
    from langchain_classic.evaluation import TrajectoryEvalChain
    from langchain_classic.tools import tool
    
    @tool
    def geography_answers(country: str, question: str) -> str:
        """Very helpful answers to geography questions."""
        return f"{country}? IDK - We may never know {question}."
    
    model = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
    agent = initialize_agent(
        tools=[geography_answers],
        llm=model,
        agent=AgentType.OPENAI_FUNCTIONS,
        return_intermediate_steps=True,
    )
    
    question = "How many dwell in the largest minor region in Argentina?"
    response = agent(question)
    
    eval_chain = TrajectoryEvalChain.from_llm(
        llm=model, agent_tools=[geography_answers], return_reasoning=True
    )
    
    result = eval_chain.evaluate_agent_trajectory(
        input=question,
        agent_trajectory=response["intermediate_steps"],
        prediction=response["output"],
        reference="Paris",
    )
    print(result["score"])  # noqa: T201
    # 0
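
    A follow-up assumption, not shown on this page: judging from the output_keys listed above, the result dict presumably carries the judge's written critique next to the numeric score.

    # Hedged assumption, mirroring the output_keys documented above: the result
    # also exposes the evaluator's critique under the "reasoning" key.
    print(result["reasoning"])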
    
    Inherited from Chain (continued)

    Methods

    set_verbose
    —

    Set the chain verbosity.

    acall
    —

    Asynchronously execute the chain.

    prep_outputs
    —

    Validate and prepare chain outputs, and save info about this run to memory.

    aprep_outputs
    —

    Validate and prepare chain outputs, and save info about this run to memory.

    aprep_inputs
    —

    Prepare chain inputs, including adding inputs from memory.

    run
    —

    Convenience method for executing chain.

    arun
    —

    Convenience method for executing chain.

    dict
    —

    Return dictionary representation of agent.

    save
    —

    Save the agent.

    apply
    —

    Utilize the LLM generate method for speed gains.

    Inherited from Runnable (continued)

    config_schema · get_config_jsonschema · get_graph · get_prompts · pipe · pick · assign · invoke · ainvoke · batch (see the sketch below) · batch_as_completed · abatch · abatch_as_completed · stream · astream · astream_log · astream_events · transform · atransform · bind · with_config · with_listeners · with_alisteners · with_types · with_retry · map · with_fallbacks · as_tool
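
    Since the chain implements the Runnable interface, several trajectories can in principle be scored in one call via batch(). A hedged sketch, not from this page: the input key names are the same assumption as in the invoke() sketch above, and `cases` is a hypothetical list of (question, intermediate_steps, answer) tuples.

    # Hedged sketch: `eval_chain` as before; `cases` is hypothetical test data.
    results = eval_chain.batch(
        [
            {
                "question": q,
                "agent_trajectory": eval_chain.get_agent_trajectory(steps),
                "answer": ans,
                "reference": "",  # an expected answer can be supplied here
            }
            for q, steps, ans in cases
        ]
    )
    # Each item in `results` is assumed to be a dict like the invoke() result above.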