# ConversationChain

> **Class** in `langchain_classic`

📖 [View in docs](https://reference.langchain.com/python/langchain-classic/chains/conversation/base/ConversationChain)

Chain to have a conversation and load context from memory.

This class is deprecated in favor of `RunnableWithMessageHistory`. See the
[chatbot tutorial](https://python.langchain.com/docs/tutorials/chatbot/) for details.

`RunnableWithMessageHistory` offers several benefits, including:

- Stream, batch, and async support
- More flexible memory handling, including the ability to manage memory outside the chain
- Support for multiple threads

Below is a minimal implementation, analogous to using `ConversationChain` with
the default `ConversationBufferMemory`:

```python
from langchain_core.chat_history import InMemoryChatMessageHistory
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_openai import ChatOpenAI

store = {}  # memory is maintained outside the chain

def get_session_history(session_id: str) -> InMemoryChatMessageHistory:
    if session_id not in store:
        store[session_id] = InMemoryChatMessageHistory()
    return store[session_id]

model = ChatOpenAI(model="gpt-3.5-turbo-0125")

chain = RunnableWithMessageHistory(model, get_session_history)
chain.invoke(
    "Hi I'm Bob.",
    config={"configurable": {"session_id": "1"}},
)  # session_id determines thread
```
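
Because the history store lives outside the chain, a follow-up call with the same
`session_id` continues the thread, while a new `session_id` starts from an empty
history. A sketch continuing the example above:

```python
# Same session id: the model sees the earlier "Hi I'm Bob." exchange.
chain.invoke(
    "What is my name?",
    config={"configurable": {"session_id": "1"}},
)

# Different session id: get_session_history creates a fresh, empty
# history, so the model has no prior context for this thread.
chain.invoke(
    "What is my name?",
    config={"configurable": {"session_id": "2"}},
)
```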

Memory objects can also be incorporated into the `get_session_history` callable:

```python
from langchain_classic.memory import ConversationBufferWindowMemory
from langchain_core.chat_history import InMemoryChatMessageHistory
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_openai import ChatOpenAI

store = {}  # memory is maintained outside the chain

def get_session_history(session_id: str) -> InMemoryChatMessageHistory:
    if session_id not in store:
        # New session: start with an empty history and return early.
        store[session_id] = InMemoryChatMessageHistory()
        return store[session_id]

    # Existing session: use windowed memory to keep only the last 3 exchanges.
    memory = ConversationBufferWindowMemory(
        chat_memory=store[session_id],
        k=3,
        return_messages=True,
    )
    assert len(memory.memory_variables) == 1
    key = memory.memory_variables[0]
    messages = memory.load_memory_variables({})[key]
    # Rebuild the stored history from the trimmed message window.
    store[session_id] = InMemoryChatMessageHistory(messages=messages)
    return store[session_id]

model = ChatOpenAI(model="gpt-3.5-turbo-0125")

chain = RunnableWithMessageHistory(model, get_session_history)
chain.invoke(
    "Hi I'm Bob.",
    config={"configurable": {"session_id": "1"}},
)  # session_id determines thread
```
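
Note that the trimming happens on read: each time `get_session_history` runs for an
existing session, the stored history is rebuilt from only the last `k=3` exchanges,
so older turns are permanently dropped from the store rather than merely hidden from
the prompt.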

## Signature

```python
ConversationChain()
```

## Description

**Example:**

```python
from langchain_classic.chains import ConversationChain
from langchain_openai import OpenAI

conversation = ConversationChain(llm=OpenAI())
```
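
Continuing that example, a minimal usage sketch (assuming the class defaults of
`input_key="input"` and `output_key="response"`):

```python
# Chain invocation returns a dict keyed by the chain's output key.
result = conversation.invoke({"input": "Hi, I'm Bob."})
print(result["response"])

# predict() is the LLMChain convenience method that returns just the string.
print(conversation.predict(input="What's my name?"))
```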

## Extends

- `LLMChain`

## Properties

- `memory`
- `prompt`
- `input_key`
- `output_key`
- `model_config`
- `input_keys`
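
A quick sketch of how these surface at runtime. The `"input"`/`"response"` key
defaults are an assumption taken from the `langchain_classic` source;
`ConversationBufferMemory` is the documented default memory:

```python
from langchain_classic.chains import ConversationChain
from langchain_openai import OpenAI

conversation = ConversationChain(llm=OpenAI())

conversation.input_key   # "input" (assumed default)
conversation.output_key  # "response" (assumed default)
conversation.input_keys  # ["input"]; memory-provided keys are excluded
conversation.memory      # ConversationBufferMemory by default
```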

## Methods

- [`is_lc_serializable()`](https://reference.langchain.com/python/langchain-classic/chains/conversation/base/ConversationChain/is_lc_serializable)
- [`validate_prompt_input_variables()`](https://reference.langchain.com/python/langchain-classic/chains/conversation/base/ConversationChain/validate_prompt_input_variables)

## ⚠️ Deprecated

Deprecated since version 0.2.7. Use `RunnableWithMessageHistory` instead, as shown above.

---

[View source on GitHub](https://github.com/langchain-ai/langchain/blob/ee95ad6907f5eab94644183393a20aa2a032bb19/libs/langchain/langchain_classic/chains/conversation/base.py#L14)