LangChain Reference
langchain_classic › chains › retrieval_qa › base › RetrievalQA

Class · Since v1.0 · Deprecated

RetrievalQA

RetrievalQA()

Bases: BaseRetrievalQA

Used in Docs

• Activeloop Deep Lake integration
• Activeloop Deep Memory integration
• Amazon DocumentDB integration
• Apache Doris integration
• Bedrock (Knowledge Bases) integration

Attributes

Inherited from BaseRetrievalQA

combine_documents_chain: BaseCombineDocumentsChain — Chain to use to combine documents.

input_key: str | None — The key from the model Run's inputs to use as the eval input.

output_key: str

return_source_documents: bool — Return the source documents.

model_config

input_keys: list[str]

output_keys: list[str] — The keys to use for the output.
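Taken together, these attributes define the chain's I/O contract: the value stored under `input_key` is the question, the answer comes back under `output_key`, and `return_source_documents` adds the retrieved documents to the output. A minimal plain-Python stand-in (not LangChain code; the default key names `"query"`/`"result"` are assumptions of this sketch):

```python
# Stand-in for the RetrievalQA I/O contract described above. The real
# chain delegates to a retriever and a combine-documents chain; both are
# stubbed here so the key handling is the only thing on display.
def fake_retrieval_qa(inputs, *, input_key="query", output_key="result",
                      return_source_documents=False):
    question = inputs[input_key]                    # read the input key
    docs = [f"doc matching {question!r}"]           # stand-in retriever
    answer = f"answer from {len(docs)} doc(s)"      # stand-in LLM call
    outputs = {output_key: answer}
    if return_source_documents:                     # mirrors the attribute
        outputs["source_documents"] = docs
    return outputs

out = fake_retrieval_qa({"query": "What is LangChain?"},
                        return_source_documents=True)
print(sorted(out))  # -> ['result', 'source_documents']
```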

Methods

from_llm — Initialize from LLM using default template.

from_chain_type — Load chain from chain type.

Inherited from Chain

Attributes

memory: BaseMemory | None — Optional memory object.

callbacks: Callbacks

verbose: bool

tags: list[str] | None

metadata: dict[str, Any] | None

callback_manager: BaseCallbackManager | None — [DEPRECATED] Use callbacks instead.

model_config

input_keys: list[str]

output_keys: list[str] — The keys to use for the output.

Methods

get_input_schema · get_output_schema · invoke · ainvoke

raise_callback_manager_deprecation — Raise deprecation warning if callback_manager is used.

Inherited from RunnableSerializable (langchain_core)

Attributes: name · model_config

Methods: to_json · configurable_fields · configurable_alternatives

Inherited from Serializable (langchain_core)

Attributes: lc_secrets · lc_attributes · model_config

Methods: is_lc_serializable · get_lc_namespace · lc_id · to_json · to_json_not_implemented

Inherited from Runnable (langchain_core)

Attributes: name · InputType · OutputType · input_schema · output_schema · config_specs

Methods: get_name · get_input_schema · get_input_jsonschema · get_output_schema · get_output_jsonschema

Attribute

retriever: BaseRetriever

    Chain for question-answering against an index.

    This class is deprecated. See below for an example implementation using create_retrieval_chain:

    from langchain_classic.chains import create_retrieval_chain
    from langchain_classic.chains.combine_documents import (
        create_stuff_documents_chain,
    )
    from langchain_core.prompts import ChatPromptTemplate
    from langchain_openai import ChatOpenAI
    
    retriever = ...  # Your retriever
    model = ChatOpenAI()
    
    system_prompt = (
        "Use the given context to answer the question. "
        "If you don't know the answer, say you don't know. "
    "Use three sentences maximum and keep the answer concise. "
        "Context: {context}"
    )
    prompt = ChatPromptTemplate.from_messages(
        [
            ("system", system_prompt),
            ("human", "{input}"),
        ]
    )
    question_answer_chain = create_stuff_documents_chain(model, prompt)
    chain = create_retrieval_chain(retriever, question_answer_chain)
    
    chain.invoke({"input": query})
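The migration snippet above wires a retriever into a stuff-documents chain: the retrieved documents are concatenated into the prompt's `{context}` slot before a single model call, and the result carries `input`, `context`, and `answer` keys. A plain-Python sketch of that data flow (helper names here are illustrative, not LangChain APIs):

```python
def stuff_documents(docs):
    # "Stuff" strategy: concatenate every retrieved document into one
    # context string that fills the prompt's {context} slot.
    return "\n\n".join(docs)

def retrieval_chain(retrieve, generate, user_input):
    docs = retrieve(user_input)            # 1. retrieval step
    context = stuff_documents(docs)        # 2. combine step
    prompt = (
        "Use the given context to answer the question.\n"
        f"Context: {context}\nQuestion: {user_input}"
    )
    # Output shape mirrors create_retrieval_chain: input, context, answer.
    return {"input": user_input, "context": docs, "answer": generate(prompt)}

result = retrieval_chain(
    retrieve=lambda q: ["LangChain is a framework for building LLM apps."],
    generate=lambda prompt: "It is a framework for building LLM apps.",
    user_input="What is LangChain?",
)
print(result["answer"])
```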

    Example:

    from langchain_openai import OpenAI
    from langchain_classic.chains import RetrievalQA
    from langchain_community.vectorstores import FAISS
    from langchain_core.vectorstores import VectorStoreRetriever
    
    retriever = VectorStoreRetriever(vectorstore=FAISS(...))
    retrievalQA = RetrievalQA.from_llm(llm=OpenAI(), retriever=retriever)
Methods

set_verbose — Set the chain verbosity.

acall — Asynchronously execute the chain.

prep_outputs — Validate and prepare chain outputs, and save info about this run to memory.

aprep_outputs — Validate and prepare chain outputs, and save info about this run to memory.

prep_inputs — Prepare chain inputs, including adding inputs from memory.

aprep_inputs — Prepare chain inputs, including adding inputs from memory.

run — Convenience method for executing chain.

arun — Convenience method for executing chain.

dict — Return dictionary representation of agent.

save — Save the agent.

apply — Utilize the LLM generate method for speed gains.
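The `prep_inputs` step listed above merges variables loaded from the attached memory object into the caller-supplied inputs before the chain body runs. A hedged sketch of that merge (the real method also validates keys; which side wins on a key collision is an assumption of this sketch):

```python
def prep_inputs(user_inputs, memory_variables):
    # Combine the caller's inputs with whatever the memory object loaded
    # (e.g. chat history), so the chain body sees a single dict.
    merged = dict(user_inputs)
    merged.update(memory_variables)
    return merged

inputs = prep_inputs({"query": "and after that?"},
                     {"chat_history": "Human: hi\nAI: hello"})
```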

Methods (continued)

config_schema · get_config_jsonschema · get_graph · get_prompts · pipe · pick · assign · invoke · ainvoke · batch · batch_as_completed · abatch · abatch_as_completed · stream · astream · astream_log · astream_events · transform · atransform · bind · with_config · with_listeners · with_alisteners · with_types · with_retry · map · with_fallbacks · as_tool