Python · langchain_classic.chains.openai_functions.citation_fuzzy_match
Module · Since v1.0

    citation_fuzzy_match

    Functions

    function
    get_llm_kwargs

    Return the kwargs for the LLMChain constructor.
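As a minimal illustration of the shape these kwargs take, here is a plain-Python sketch of forced OpenAI function calling. The helper name `get_llm_kwargs_sketch` and the `cite_answer` schema are illustrative assumptions, not the library source:

```python
# Hedged sketch: the kwargs shape used to force an OpenAI model to call
# one specific function. Plain-Python reimplementation for illustration;
# not the actual library code.
def get_llm_kwargs_sketch(function: dict) -> dict:
    """Build model kwargs that force a call to the given function."""
    return {
        "functions": [function],
        "function_call": {"name": function["name"]},
    }

# Hypothetical function schema for demonstration.
schema = {
    "name": "cite_answer",
    "description": "Answer a question with cited evidence.",
    "parameters": {"type": "object", "properties": {}},
}

kwargs = get_llm_kwargs_sketch(schema)
```

Binding these kwargs onto the model is what makes the chain's output parse reliably into the structured classes below.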

    function
    create_citation_fuzzy_match_runnable

    Create a citation fuzzy match Runnable.

    Example usage:

    from langchain_classic.chains import create_citation_fuzzy_match_runnable
    from langchain_openai import ChatOpenAI
    
    model = ChatOpenAI(model="gpt-4o-mini")
    
    context = "Alice has blue eyes. Bob has brown eyes. Charlie has green eyes."
    question = "What color are Bob's eyes?"
    
    chain = create_citation_fuzzy_match_runnable(model)
    chain.invoke({"question": question, "context": context})
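To show what the "fuzzy match" part of the name refers to, here is a stand-alone sketch of locating a cited quote inside the source context even when it is not an exact substring. The real runnable relies on the model's structured output; this illustration uses only `difflib` from the standard library, and `fuzzy_find` is a hypothetical helper:

```python
# Hedged sketch of fuzzy-matching a cited quote back to its span in the
# source context, using stdlib difflib (not the library's implementation).
import difflib
from typing import Optional, Tuple

def fuzzy_find(quote: str, context: str) -> Optional[Tuple[int, int]]:
    """Return the (start, end) span in `context` best matching `quote`."""
    matcher = difflib.SequenceMatcher(None, quote, context, autojunk=False)
    match = matcher.find_longest_match(0, len(quote), 0, len(context))
    if match.size == 0:
        return None
    return (match.b, match.b + match.size)

context = "Alice has blue eyes. Bob has brown eyes. Charlie has green eyes."
span = fuzzy_find("Bob has brown eyes", context)
print(context[span[0]:span[1]])  # prints "Bob has brown eyes"
```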
deprecated function
    create_citation_fuzzy_match_chain

    Create a citation fuzzy match chain.

    Classes

    class
    FactWithEvidence

    Class representing a single statement.

Each fact has a body and a list of sources. If there are multiple facts, break them apart so that each one uses only the set of sources relevant to it.

    class
    QuestionAnswer

    A question and its answer as a list of facts.

    Each fact should have a source. Each sentence contains a body and a list of sources.
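The two classes above can be sketched as plain dataclasses to show their shape. The real classes are Pydantic models, and the exact field names here (`fact`, `substring_quote`) are illustrative assumptions:

```python
# Hedged sketch of the data shapes described above, using stdlib
# dataclasses instead of Pydantic; field names are assumptions.
from dataclasses import dataclass
from typing import List

@dataclass
class FactWithEvidence:
    fact: str                   # body of the single statement
    substring_quote: List[str]  # source spans supporting the fact

@dataclass
class QuestionAnswer:
    question: str
    answer: List[FactWithEvidence]  # each fact carries its own sources

qa = QuestionAnswer(
    question="What color are Bob's eyes?",
    answer=[
        FactWithEvidence(
            fact="Bob has brown eyes.",
            substring_quote=["Bob has brown eyes."],
        )
    ],
)
```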

deprecated class
    LLMChain

    Chain to run queries against LLMs.

    This class is deprecated. See below for an example implementation using LangChain runnables:

    from langchain_core.output_parsers import StrOutputParser
    from langchain_core.prompts import PromptTemplate
    from langchain_openai import OpenAI
    
    prompt_template = "Tell me a {adjective} joke"
    prompt = PromptTemplate(input_variables=["adjective"], template=prompt_template)
    model = OpenAI()
    chain = prompt | model | StrOutputParser()
    
chain.invoke({"adjective": "your adjective here"})