LangChain Reference
langchain-classic › evaluation › scoring › eval_chain › LabeledScoreStringEvalChain › from_llm
Method · Since v1.0

    from_llm

    Initialize the LabeledScoreStringEvalChain from an LLM.

    from_llm(
      cls,
      llm: BaseLanguageModel,
      *,
      prompt: PromptTemplate | None = None,
      criteria: CRITERIA_TYPE | str | None = None,
      normalize_by: float | None = None,
      **kwargs: Any,
    ) -> LabeledScoreStringEvalChain

    Parameters

    llm (BaseLanguageModel, required)

    The LLM to use for scoring.

    prompt (PromptTemplate | None, default: None)

    The prompt to use. If None, the default scoring prompt is used.

    criteria (CRITERIA_TYPE | str | None, default: None)

    The criteria to score the output against.

    normalize_by (float | None, default: None)

    If set, the parsed score is divided by this value (e.g. normalize_by=10 maps the 1-10 scale to 0-1).

    **kwargs (Any)

    Additional keyword arguments passed through to the chain constructor.
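    A minimal sketch of calling from_llm, using a fake LLM with a canned response so the scoring flow runs offline. The import path assumes the classic langchain package layout (the package is named langchain-classic in v1; adjust the import accordingly), and the fake response follows the "Rating: [[n]]" format the chain's output parser expects.

    ```python
    # Illustrative only: a fake LLM stands in for a real chat model so no
    # API calls are made. Adjust imports for your installed package layout.
    from langchain.evaluation.scoring import LabeledScoreStringEvalChain
    from langchain_core.language_models import FakeListLLM

    # Canned response in the "Rating: [[n]]" format the parser looks for.
    llm = FakeListLLM(
        responses=["The answer matches the reference. Rating: [[7]]"]
    )

    # normalize_by=10 divides the raw 1-10 score, mapping it to 0-1.
    chain = LabeledScoreStringEvalChain.from_llm(llm=llm, normalize_by=10.0)

    result = chain.evaluate_strings(
        prediction="Paris is the capital of France.",
        reference="The capital of France is Paris.",
        input="What is the capital of France?",
    )
    print(result["score"])  # raw [[7]] divided by 10 -> 0.7
    ```

    Because this is the labeled variant, a reference string is required; the unlabeled ScoreStringEvalChain omits it.
    
    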
