LangChain Reference
langchain_classic.chains.qa_generation.base

Module ● Since v1.0

Attributes

attribute PROMPT_SELECTOR

Classes

class Chain

    Abstract base class for creating structured sequences of calls to components.

Chains encode a sequence of calls to components (models, document retrievers, other chains, and so on) and provide a simple interface to that sequence.
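The core idea (a pipeline of callables where each step's output feeds the next) can be sketched in plain Python, independent of the LangChain API. Every name below is illustrative, not part of the library:

```python
# Illustrative only: a minimal pipeline in plain Python, not the LangChain API.
def make_chain(*steps):
    """Compose callables so each step's output feeds the next."""
    def run(value):
        for step in steps:
            value = step(value)
        return value
    return run


# Hypothetical stand-ins for a prompt template, a model, and an output parser.
def format_prompt(topic):
    return f"Tell me a joke about {topic}"


def fake_model(prompt):
    return f"MODEL RESPONSE to: {prompt}"


def parse_output(text):
    return text.strip()


chain = make_chain(format_prompt, fake_model, parse_output)
result = chain("cats")
```

LangChain's `Runnable` pipe operator (`prompt | model | parser`, shown in the examples below) provides this same composition, plus batching, streaming, and async support.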

deprecated class LLMChain

    Chain to run queries against LLMs.

    This class is deprecated. See below for an example implementation using LangChain runnables:

    from langchain_core.output_parsers import StrOutputParser
    from langchain_core.prompts import PromptTemplate
    from langchain_openai import OpenAI
    
    prompt_template = "Tell me a {adjective} joke"
    prompt = PromptTemplate(input_variables=["adjective"], template=prompt_template)
    model = OpenAI()
    chain = prompt | model | StrOutputParser()
    
chain.invoke({"adjective": "funny"})
deprecated class QAGenerationChain

    Base class for question-answer generation chains.

    This class is deprecated. See below for an alternative implementation.

    Advantages of this implementation include:

    • Supports async and streaming;
    • Surfaces prompt and text splitter for easier customization;
    • Use of JsonOutputParser supports JSONPatch operations in streaming mode, as well as robustness to markdown.
    from langchain_classic.chains.qa_generation.prompt import (
        CHAT_PROMPT as prompt,
    )
    
    # Note: import PROMPT if using a legacy non-chat model.
    from langchain_core.output_parsers import JsonOutputParser
    from langchain_core.runnables import (
        RunnableLambda,
        RunnableParallel,
        RunnablePassthrough,
    )
    from langchain_core.runnables.base import RunnableEach
    from langchain_openai import ChatOpenAI
    from langchain_text_splitters import RecursiveCharacterTextSplitter
    
    model = ChatOpenAI()
    text_splitter = RecursiveCharacterTextSplitter(chunk_overlap=500)
    split_text = RunnableLambda(lambda x: text_splitter.create_documents([x]))
    
    chain = RunnableParallel(
        text=RunnablePassthrough(),
        questions=(
            split_text | RunnableEach(bound=prompt | model | JsonOutputParser())
        ),
    )
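Invoking the chain above requires an OpenAI API key. The shape of its result (the original text alongside a list of parsed question/answer dicts, one per split chunk) can be illustrated with plain-Python stand-ins; all names here are hypothetical, not LangChain APIs:

```python
# Illustrative only: emulates the RunnableParallel output shape with stubs.
import json


def split_text(text, chunk_size=20):
    # Stand-in for RecursiveCharacterTextSplitter.create_documents.
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]


def fake_model(chunk):
    # Stand-in for prompt | model | JsonOutputParser: the real chain asks the
    # model to emit a JSON question/answer pair for each chunk.
    return json.dumps({"question": f"What does '{chunk}' say?", "answer": chunk})


def qa_chain(text):
    # Mirrors RunnableParallel(text=passthrough, questions=split | each(...)).
    return {
        "text": text,
        "questions": [json.loads(fake_model(c)) for c in split_text(text)],
    }


result = qa_chain("LangChain composes runnables.")
```

With the real chain, `chain.invoke(some_text)` returns an analogous dict, with `questions` produced by the chat model rather than a stub.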