LangChain Reference
    langchain_classic.chains.constitutional_ai.base.ConstitutionalChain
    Class · Since v1.0 · Deprecated

    ConstitutionalChain

    Bases

    Chain

    Attributes

    chain: LLMChain
    constitutional_principles: list[ConstitutionalPrinciple]
    critique_chain: LLMChain
    revision_chain: LLMChain
    return_intermediate_steps: bool
    input_keys: list[str]
    output_keys: list[str]

    Methods

    get_principles
    from_llm

    Inherited from Chain

    Attributes

    memory: BaseMemory | None — Optional memory object.
    callbacks: Callbacks
    verbose: bool
    tags: list[str] | None
    metadata: dict[str, Any] | None
    callback_manager: BaseCallbackManager | None — [DEPRECATED] Use callbacks instead.
    model_config

    Methods

    get_input_schema
    get_output_schema
    invoke
    ainvoke
    raise_callback_manager_deprecation — Raise deprecation warning if callback_manager is used.

    Inherited from RunnableSerializable (langchain_core)

    Attributes

    name
    model_config

    Methods

    to_json
    configurable_fields

    Inherited from Serializable (langchain_core)

    Attributes

    lc_secrets
    lc_attributes
    model_config

    Methods

    is_lc_serializable

    Inherited from Runnable (langchain_core)

    Attributes

    name
    InputType
    OutputType
    input_schema

    Chain for applying constitutional principles.

    Note

    This class is deprecated. See below for a replacement implementation using LangGraph. The benefits of this implementation are:

    • Uses LLM tool calling features instead of parsing string responses;
    • Support for both token-by-token and step-by-step streaming;
    • Support for checkpointing and memory of chat history;
    • Easier to modify or extend (e.g., with additional tools, structured responses, etc.)

    Install LangGraph with:

    pip install -U langgraph
    from typing import List, Optional, Tuple
    
    from langchain_classic.chains.constitutional_ai.prompts import (
        CRITIQUE_PROMPT,
        REVISION_PROMPT,
    )
    from langchain_classic.chains.constitutional_ai.models import ConstitutionalPrinciple
    from langchain_core.output_parsers import StrOutputParser
    from langchain_core.prompts import ChatPromptTemplate
    from langchain_openai import ChatOpenAI
    from langgraph.graph import END, START, StateGraph
    from typing_extensions import Annotated, TypedDict
    
    model = ChatOpenAI(model="gpt-4o-mini")
    
    class Critique(TypedDict):
        """Generate a critique, if needed."""
        critique_needed: Annotated[bool, ..., "Whether or not a critique is needed."]
        critique: Annotated[str, ..., "If needed, the critique."]
    
    critique_prompt = ChatPromptTemplate.from_template(
        "Critique this response according to the critique request. "
        "If no critique is needed, specify that.\n\n"
        "Query: {query}\n\n"
        "Response: {response}\n\n"
        "Critique request: {critique_request}"
    )
    
    revision_prompt = ChatPromptTemplate.from_template(
        "Revise this response according to the critique and revision request.\n\n"
        "Query: {query}\n\n"
        "Response: {response}\n\n"
        "Critique request: {critique_request}\n\n"
        "Critique: {critique}\n\n"
        "If the critique does not identify anything worth changing, ignore the "
        "revision request and return 'No revisions needed'. If the critique "
        "does identify something worth changing, revise the response based on "
        "the revision request.\n\n"
        "Revision Request: {revision_request}"
    )
    
    chain = model | StrOutputParser()
    critique_chain = critique_prompt | model.with_structured_output(Critique)
    revision_chain = revision_prompt | model | StrOutputParser()
    
    class State(TypedDict):
        query: str
        constitutional_principles: List[ConstitutionalPrinciple]
        initial_response: str
        critiques_and_revisions: List[Tuple[str, str]]
        response: str
    
    async def generate_response(state: State):
        """Generate initial response."""
        response = await chain.ainvoke(state["query"])
        return {"response": response, "initial_response": response}
    
    async def critique_and_revise(state: State):
        """Critique and revise response according to principles."""
        critiques_and_revisions = []
        response = state["initial_response"]
        for principle in state["constitutional_principles"]:
            critique = await critique_chain.ainvoke(
                {
                    "query": state["query"],
                    "response": response,
                    "critique_request": principle.critique_request,
                }
            )
            if critique["critique_needed"]:
                revision = await revision_chain.ainvoke(
                    {
                        "query": state["query"],
                        "response": response,
                        "critique_request": principle.critique_request,
                        "critique": critique["critique"],
                        "revision_request": principle.revision_request,
                    }
                )
                response = revision
                critiques_and_revisions.append((critique["critique"], revision))
            else:
                critiques_and_revisions.append((critique["critique"], ""))
        return {
            "critiques_and_revisions": critiques_and_revisions,
            "response": response,
        }
    
    graph = StateGraph(State)
    graph.add_node("generate_response", generate_response)
    graph.add_node("critique_and_revise", critique_and_revise)
    
    graph.add_edge(START, "generate_response")
    graph.add_edge("generate_response", "critique_and_revise")
    graph.add_edge("critique_and_revise", END)
    app = graph.compile()
    
    constitutional_principles = [
        ConstitutionalPrinciple(
            critique_request="Tell if this answer is good.",
            revision_request="Give a better answer.",
        )
    ]
    
    query = "What is the meaning of life? Answer in 10 words or fewer."
    
    async for step in app.astream(
        {"query": query, "constitutional_principles": constitutional_principles},
        stream_mode="values",
    ):
        subset = ["initial_response", "critiques_and_revisions", "response"]
        print({k: v for k, v in step.items() if k in subset})
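    The critique_and_revise node above can also be sketched without any model calls. The stub chains below are illustrative stand-ins (not part of the library) that show how each principle's critique gates a revision and how the latest response is threaded through the loop:

    ```python
    from typing import Dict, List, Tuple


    def critique_chain(query: str, response: str, critique_request: str) -> Dict[str, object]:
        # Stub critique: flag any response shorter than five words.
        needed = len(response.split()) < 5
        return {
            "critique_needed": needed,
            "critique": "Too terse." if needed else "No critique needed.",
        }


    def revision_chain(response: str, critique: str, revision_request: str) -> str:
        # Stub revision: tag the response instead of calling an LLM.
        return response + " (revised per critique)"


    def critique_and_revise(
        query: str,
        response: str,
        principles: List[Tuple[str, str]],
    ) -> Tuple[str, List[Tuple[str, str]]]:
        """Apply each (critique_request, revision_request) pair in turn,
        threading the latest response through, as the graph node above does."""
        critiques_and_revisions: List[Tuple[str, str]] = []
        for critique_request, revision_request in principles:
            critique = critique_chain(query, response, critique_request)
            if critique["critique_needed"]:
                response = revision_chain(response, critique["critique"], revision_request)
                critiques_and_revisions.append((critique["critique"], response))
            else:
                critiques_and_revisions.append((critique["critique"], ""))
        return response, critiques_and_revisions


    final, log = critique_and_revise(
        "What is the meaning of life?",
        "Forty-two.",
        [("Tell if this answer is good.", "Give a better answer.")],
    )
    # `final` is the revised response; `log` records (critique, revision) pairs.
    ```

    Only the responses that a principle actually critiques get revised; principles whose critique finds nothing record an empty revision, matching the graph node's behavior.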

    Example:

    from langchain_openai import OpenAI
    from langchain_core.prompts import PromptTemplate
    from langchain_classic.chains import LLMChain, ConstitutionalChain
    from langchain_classic.chains.constitutional_ai.models \
        import ConstitutionalPrinciple
    
    model = OpenAI()
    
    qa_prompt = PromptTemplate(
        template="Q: {question} A:",
        input_variables=["question"],
    )
    qa_chain = LLMChain(llm=model, prompt=qa_prompt)
    
    constitutional_chain = ConstitutionalChain.from_llm(
        llm=model,
        chain=qa_chain,
        constitutional_principles=[
            ConstitutionalPrinciple(
                critique_request="Tell if this answer is good.",
                revision_request="Give a better answer.",
            )
        ],
    )
    
    constitutional_chain.run(question="What is the meaning of life?")
    
    set_verbose — Set the chain verbosity.
    acall — Asynchronously execute the chain.
    prep_outputs — Validate and prepare chain outputs, and save info about this run to memory.
    aprep_outputs — Validate and prepare chain outputs, and save info about this run to memory.
    prep_inputs — Prepare chain inputs, including adding inputs from memory.
    aprep_inputs — Prepare chain inputs, including adding inputs from memory.
    run — Convenience method for executing the chain.
    arun — Convenience method for executing the chain.
    dict — Return dictionary representation of the chain.
    save — Save the chain.
    apply — Utilize the LLM generate method for speed gains.
    configurable_alternatives
    get_lc_namespace
    lc_id
    to_json
    to_json_not_implemented

    Attributes

    output_schema
    config_specs

    Methods

    get_name, get_input_schema, get_input_jsonschema, get_output_schema,
    get_output_jsonschema, config_schema, get_config_jsonschema, get_graph,
    get_prompts, pipe, pick, assign, invoke, ainvoke, batch,
    batch_as_completed, abatch, abatch_as_completed, stream, astream,
    astream_log, astream_events, transform, atransform, bind, with_config,
    with_listeners, with_alisteners, with_types, with_retry, map,
    with_fallbacks, as_tool

    input_keys — Input keys.
    output_keys — Output keys.
    get_principles — Get constitutional principles by name.
    from_llm — Create a chain from an LLM.
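    get_principles selects built-in principles from the library's registry by name. Its lookup semantics are roughly the following sketch; the PRINCIPLES mapping here is a hypothetical stand-in (the real registry holds ConstitutionalPrinciple objects keyed by names such as "harmful1"):

    ```python
    from typing import Dict, List, Optional

    # Hypothetical stand-in for the library's built-in principle registry.
    PRINCIPLES: Dict[str, str] = {
        "harmful1": "Identify harmful, unethical, or illegal content.",
        "insensitive": "Identify insensitive or socially inappropriate content.",
    }


    def get_principles(names: Optional[List[str]] = None) -> List[str]:
        """Return all registered principles, or only those selected by name."""
        if names is None:
            return list(PRINCIPLES.values())
        return [PRINCIPLES[name] for name in names]
    ```

    Passing no names returns every registered principle; passing a list restricts the result to those entries, raising a KeyError for unknown names.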