    ConversationChain

    class langchain_classic.chains.conversation.base.ConversationChain()

    Since v1.0 · Deprecated

    Bases: LLMChain

    View source on GitHub

    Chain to have a conversation and load context from memory.

    This class is deprecated in favor of RunnableWithMessageHistory. Please refer to this tutorial for more detail: https://python.langchain.com/docs/tutorials/chatbot/

    RunnableWithMessageHistory offers several benefits, including:

    • Stream, batch, and async support;
    • More flexible memory handling, including the ability to manage memory outside the chain;
    • Support for multiple threads.

    Below is a minimal implementation, analogous to using ConversationChain with the default ConversationBufferMemory:

    from langchain_core.chat_history import InMemoryChatMessageHistory
    from langchain_core.runnables.history import RunnableWithMessageHistory
    from langchain_openai import ChatOpenAI
    
    store = {}  # memory is maintained outside the chain
    
    def get_session_history(session_id: str) -> InMemoryChatMessageHistory:
        if session_id not in store:
            store[session_id] = InMemoryChatMessageHistory()
        return store[session_id]
    
    model = ChatOpenAI(model="gpt-3.5-turbo-0125")
    
    chain = RunnableWithMessageHistory(model, get_session_history)
    chain.invoke(
        "Hi I'm Bob.",
        config={"configurable": {"session_id": "1"}},
    )  # session_id determines thread
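    For example, the multi-thread and streaming benefits above can be seen directly with this chain. A minimal sketch, reusing the store, get_session_history, and chain defined in the snippet (model output will vary):

    # A follow-up on session "1" sees the earlier greeting through the stored history.
    chain.invoke(
        "What's my name?",
        config={"configurable": {"session_id": "1"}},
    )

    # A different session_id starts an independent thread with a fresh history.
    chain.invoke(
        "What's my name?",
        config={"configurable": {"session_id": "2"}},
    )

    # Streaming is supported as well; chunks arrive as the model generates them.
    for chunk in chain.stream(
        "Tell me a short fact about whales.",
        config={"configurable": {"session_id": "1"}},
    ):
        print(chunk.content, end="", flush=True)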

    Memory objects can also be incorporated into the get_session_history callable:

    from langchain_classic.memory import ConversationBufferWindowMemory
    from langchain_core.chat_history import InMemoryChatMessageHistory
    from langchain_core.runnables.history import RunnableWithMessageHistory
    from langchain_openai import ChatOpenAI
    
    store = {}  # memory is maintained outside the chain
    
    def get_session_history(session_id: str) -> InMemoryChatMessageHistory:
        if session_id not in store:
            store[session_id] = InMemoryChatMessageHistory()
            return store[session_id]
    
        memory = ConversationBufferWindowMemory(
            chat_memory=store[session_id],
            k=3,
            return_messages=True,
        )
        assert len(memory.memory_variables) == 1
        key = memory.memory_variables[0]
        messages = memory.load_memory_variables({})[key]
        store[session_id] = InMemoryChatMessageHistory(messages=messages)
        return store[session_id]
    
    model = ChatOpenAI(model="gpt-3.5-turbo-0125")
    
    chain = RunnableWithMessageHistory(model, get_session_history)
    chain.invoke(
        "Hi I'm Bob.",
        config={"configurable": {"session_id": "1"}},
    )  # session_id determines thread
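    In this variant, the first call for a session starts an empty history, while each later call passes the stored messages through ConversationBufferWindowMemory first, so only the most recent k=3 interactions are retained before the model sees the history.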

    Example:

    from langchain_classic.chains import ConversationChain
    from langchain_openai import OpenAI
    
    conversation = ConversationChain(llm=OpenAI())
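    A turn can then be run with predict, which formats the default conversation prompt with the buffered history and returns the model's reply as a string. A minimal sketch (output depends on the model):

    conversation.predict(input="Hi there! I'm Bob.")
    # The default ConversationBufferMemory replays the first turn, so the model
    # can answer from context.
    conversation.predict(input="What's my name?")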
    Attributes

    memory: BaseMemory
        Default memory store.
    prompt: BasePromptTemplate
        Default conversation prompt to use.
    input_key: str
    output_key: str
    model_config
    input_keys: list[str]
        Use this since some prompt vars come from history.

    Methods

    is_lc_serializable
    validate_prompt_input_variables
        Validate that prompt input variables are consistent.

    Inherited from LLMChain

    Attributes

    llm: BaseLanguageModel | None
    output_parser: BaseLLMOutputParser
        Output parser to use.
    return_final_only: bool
        Whether to return only the final parsed result.
    llm_kwargs: dict
    output_keys: list[str]
        The keys to use for the output.

    Methods

    generate
        Generate LLM result from inputs.
    agenerate
        Generate LLM result from inputs.
    prep_prompts
        Prepare prompts from inputs.
    aprep_prompts
        Prepare prompts from inputs.
    apply
        Utilize the LLM generate method for speed gains.
    aapply
        Utilize the LLM generate method for speed gains.
    create_outputs
        Create outputs from response.
    predict
        Format prompt with kwargs and pass to LLM.
    apredict
        Format prompt with kwargs and pass to LLM.
    predict_and_parse
        Call predict and then parse the results.
    apredict_and_parse
        Call apredict and then parse the results.
    apply_and_parse
        Call apply and then parse the results.
    aapply_and_parse
        Call apply and then parse the results.
    from_string
        Create LLMChain from LLM and template.

    Inherited from Chain

    Attributes

    callbacks: Callbacks
    verbose: bool
    tags: list[str] | None
    metadata: dict[str, Any] | None
    callback_manager: BaseCallbackManager | None
        [DEPRECATED] Use callbacks instead.
    output_keys: list[str]
        The keys to use for the output.

    Methods

    get_input_schema
    get_output_schema
    invoke
    ainvoke
    raise_callback_manager_deprecation
        Raise deprecation warning if callback_manager is used.
    set_verbose
        Set the chain verbosity.
    acall
        Asynchronously execute the chain.
    prep_outputs
        Validate and prepare chain outputs, and save info about this run to memory.
    aprep_outputs
        Validate and prepare chain outputs, and save info about this run to memory.
    prep_inputs
        Prepare chain inputs, including adding inputs from memory.
    aprep_inputs
        Prepare chain inputs, including adding inputs from memory.
    run
        Convenience method for executing chain.
    arun
        Convenience method for executing chain.
    dict
        Return dictionary representation of chain.
    save
        Save the chain.

    Inherited from RunnableSerializable (langchain_core)

    Attributes

    name

    Methods

    to_json
    configurable_fields
    configurable_alternatives

    Inherited from Serializable (langchain_core)

    Attributes

    lc_secrets
    lc_attributes

    Methods

    get_lc_namespace
    lc_id
    to_json
    to_json_not_implemented

    Inherited from Runnable (langchain_core)

    Attributes

    name
    InputType
    OutputType
    input_schema
    output_schema
    config_specs

    Methods

    get_name, get_input_schema, get_input_jsonschema, get_output_schema, get_output_jsonschema, config_schema, get_config_jsonschema, get_graph, get_prompts, pipe, pick, assign, invoke, ainvoke, batch, batch_as_completed, abatch, abatch_as_completed, stream, astream, astream_log, astream_events, transform, atransform, bind, with_config, with_listeners, with_alisteners, with_types, with_retry, map, with_fallbacks, as_tool