    Python > langchain-classic > evaluation > regex_match > base > RegexMatchStringEvaluator
    Class · Since v1.0

    RegexMatchStringEvaluator

    Compute a regex match between the prediction and the reference.

    Examples:

        import re

        # Import path follows the module breadcrumb above (langchain-classic package).
        from langchain_classic.evaluation.regex_match.base import RegexMatchStringEvaluator

        evaluator = RegexMatchStringEvaluator(flags=re.IGNORECASE)
        evaluator.evaluate_strings(
            prediction="Mindy is the CTO",
            reference="^mindy.*cto$",
        )  # Returns {'score': 1.0} due to the IGNORECASE flag

        evaluator = RegexMatchStringEvaluator()
        evaluator.evaluate_strings(
            prediction="Mindy is the CTO",
            reference="^Mike.*CEO$",
        )  # Returns {'score': 0.0}

        evaluator.evaluate_strings(
            prediction="Mindy is the CTO",
            reference="^Mike.*CEO$|^Mindy.*CTO$",
        )  # Returns {'score': 1.0}: the prediction matches the second pattern in the union
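
    In practice the evaluator is often run over several prediction/reference-pattern pairs in a row. A minimal sketch using only the constructor and evaluate_strings documented on this page, and assuming the import path shown above; the example pairs are made up for illustration:

        import re

        from langchain_classic.evaluation.regex_match.base import RegexMatchStringEvaluator

        evaluator = RegexMatchStringEvaluator(flags=re.IGNORECASE)

        # Hypothetical (prediction, reference pattern) pairs.
        cases = [
            ("Mindy is the CTO", r"^mindy.*cto$"),
            ("Mike is the CEO", r"^mike.*ceo$"),
            ("Sam is an engineer", r"^mindy.*cto$"),
        ]

        for prediction, reference in cases:
            result = evaluator.evaluate_strings(prediction=prediction, reference=reference)
            print(prediction, "->", result)  # e.g. {'score': 1.0} or {'score': 0.0}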

    RegexMatchStringEvaluator(
      self,
      *,
      flags: int = 0,
      **_: Any
    )

    Bases

    StringEvaluator

    Parameters

    flags: int
    Default: 0

    Flags to use for the regex match. Defaults to no flags.
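
    Because the flags are standard re module flags (as in the re.IGNORECASE example above), several of them can be combined with a bitwise OR. A minimal sketch, assuming the import path shown in the examples; the prediction text and pattern are made up for illustration:

        import re

        from langchain_classic.evaluation.regex_match.base import RegexMatchStringEvaluator

        # IGNORECASE makes the match case-insensitive; DOTALL lets '.' span newlines.
        evaluator = RegexMatchStringEvaluator(flags=re.IGNORECASE | re.DOTALL)
        result = evaluator.evaluate_strings(
            prediction="Mindy is the CTO.\nShe joined in 2019.",
            reference=r"^mindy.*2019\.$",
        )
        print(result)  # e.g. {'score': 1.0} when the pattern matches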

    Constructors

    constructor
    __init__
    flags: int

    Attributes

    attribute
    flags: int
    attribute
    requires_input: bool

    This evaluator does not require input.

    attribute
    requires_reference: bool

    This evaluator requires a reference.

    attribute
    input_keys: list[str]

    Get the input keys.

    attribute
    evaluation_name: str

    Get the evaluation name.
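
    These attributes can be checked on an instance to see how the evaluator fits the string-evaluator interface. A minimal sketch, assuming the import path shown in the examples; the comments restate the attribute descriptions on this page:

        from langchain_classic.evaluation.regex_match.base import RegexMatchStringEvaluator

        evaluator = RegexMatchStringEvaluator()
        print(evaluator.requires_input)      # False: this evaluator does not require input
        print(evaluator.requires_reference)  # True: a reference pattern is required
        print(evaluator.input_keys)          # the input keys used by evaluate_strings
        print(evaluator.evaluation_name)     # the name reported for this evaluation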

    Inherited from StringEvaluator

    Methods

    method
    evaluate_strings

    Evaluate Chain or LLM output, based on optional input and label.

    method
    aevaluate_strings

    Asynchronously evaluate Chain or LLM output, based on optional input and label.
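
    aevaluate_strings mirrors evaluate_strings and can be awaited inside an event loop. A minimal sketch, assuming the import path shown in the examples above:

        import asyncio
        import re

        from langchain_classic.evaluation.regex_match.base import RegexMatchStringEvaluator

        async def main() -> None:
            evaluator = RegexMatchStringEvaluator(flags=re.IGNORECASE)
            result = await evaluator.aevaluate_strings(
                prediction="Mindy is the CTO",
                reference="^mindy.*cto$",
            )
            print(result)  # e.g. {'score': 1.0}

        asyncio.run(main())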
