    LLMChain

    langchain_classic.chains.llm.LLMChain

    Class · Since v1.0 · Deprecated

    Bases: Chain

    Used in Docs

    • Aim integrations
    • Alibaba cloud pai eas integration
    • Anyscale integration
    • Aphrodite engine integration
    • Argilla integration
    (41 more not shown)

    Inherited from Chain

    Attributes

    • memory: BaseMemory | None
      Optional memory object.
    • callbacks: Callbacks
    • verbose: bool
    • tags: list[str] | None
    • metadata: dict[str, Any] | None
    • callback_manager: BaseCallbackManager | None
      [DEPRECATED] Use callbacks instead.

    Methods

    • get_input_schema
    • get_output_schema
    • invoke
    • ainvoke
    • raise_callback_manager_deprecation
      Raise a deprecation warning if callback_manager is used.
    • set_verbose
      Set the chain verbosity.
    • acall
      Asynchronously execute the chain.
    • prep_outputs
      Validate and prepare chain outputs, and save info about this run to memory.
    • aprep_outputs
      Asynchronously validate and prepare chain outputs, and save info about this run to memory.
    • prep_inputs
      Prepare chain inputs, including adding inputs from memory.
    • aprep_inputs
      Asynchronously prepare chain inputs, including adding inputs from memory.
    • run
      Convenience method for executing the chain.
    • arun
      Asynchronous convenience method for executing the chain.
    • dict
      Return a dictionary representation of the chain.
    • save
      Save the chain.
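The prep_inputs/prep_outputs pair describes how a chain threads memory through a run: memory-provided variables are merged into the inputs before execution, and the finished run is saved back afterwards. A rough stdlib-only sketch of that flow, with a hypothetical `DictMemory` stand-in (not langchain's BaseMemory API):

```python
class DictMemory:
    # Hypothetical stand-in for a memory object: stores past turns and
    # exposes them as an extra prompt variable named "history".
    def __init__(self):
        self.history: list[str] = []

    def load_memory_variables(self) -> dict:
        return {"history": "\n".join(self.history)}

    def save_context(self, inputs: dict, outputs: dict) -> None:
        self.history.append(f"{inputs['question']} -> {outputs['text']}")

def prep_inputs(memory: DictMemory, inputs: dict) -> dict:
    # Mirror prep_inputs: merge memory-provided variables into the run inputs.
    return {**memory.load_memory_variables(), **inputs}

def prep_outputs(memory: DictMemory, inputs: dict, outputs: dict) -> dict:
    # Mirror prep_outputs: persist this run to memory, then return the outputs.
    memory.save_context(inputs, outputs)
    return outputs
```

On the next run, `prep_inputs` would then surface the saved turn under the `history` key.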

    Inherited from RunnableSerializable (langchain_core)

    Attributes: name
    Methods: to_json, configurable_fields, configurable_alternatives

    Inherited from Serializable (langchain_core)

    Attributes: lc_secrets, lc_attributes
    Methods: get_lc_namespace, lc_id, to_json, to_json_not_implemented

    Inherited from Runnable (langchain_core)

    Attributes: name, InputType, OutputType, input_schema, output_schema, config_specs

    Methods: get_name, get_input_schema, get_input_jsonschema, get_output_schema, get_output_jsonschema (list continued below, after the LLMChain attributes and methods)
    Attributes

    • prompt: BasePromptTemplate
      Prompt object to use.
    • llm: Runnable[LanguageModelInput, str] | Runnable[LanguageModelInput, BaseMessage]
      Language model to call.
    • output_key: str
    • output_parser: BaseLLMOutputParser
      Output parser to use. Defaults to one that takes the most likely string but does not change it otherwise.
    • return_final_only: bool
      Whether to return only the final parsed result. If False, extra information about the generation is returned as well.
    • llm_kwargs: dict
    • model_config
    • input_keys: list[str]
      Will be whatever keys the prompt expects.
    • output_keys: list[str]
      Will always return the text key.
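Since `input_keys` is derived from the prompt's input variables, it can be illustrated with a stdlib-only sketch that extracts the named fields of a format-style template via `string.Formatter` (a hypothetical helper, not langchain's implementation):

```python
from string import Formatter

def template_input_keys(template: str) -> list[str]:
    # Collect the named fields of a format-style template, mirroring how
    # a prompt's expected input keys are derived from its template.
    return [field for _, field, _, _ in Formatter().parse(template) if field]

print(template_input_keys("Tell me a {adjective} joke"))  # prints ['adjective']
```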

    Methods

    • is_lc_serializable
    • generate
      Generate LLM result from inputs.
    • agenerate
      Asynchronously generate LLM result from inputs.
    • prep_prompts
      Prepare prompts from inputs.
    • aprep_prompts
      Asynchronously prepare prompts from inputs.
    • apply
      Utilize the LLM generate method for speed gains.
    • aapply
      Asynchronously utilize the LLM generate method for speed gains.
    • create_outputs
      Create outputs from response.
    • predict
      Format prompt with kwargs and pass to LLM.
    • apredict
      Asynchronously format prompt with kwargs and pass to LLM.
    • predict_and_parse
      Call predict and then parse the results.
    • apredict_and_parse
      Call apredict and then parse the results.
    • apply_and_parse
      Call apply and then parse the results.
    • aapply_and_parse
      Call aapply and then parse the results.
    • from_string
      Create an LLMChain from an LLM and a template.
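The predict/apply pair above describes two call patterns: format a prompt with keyword arguments and pass it to the model, or run that over a whole list of input dicts. A minimal stdlib sketch of the two patterns, using a stand-in echo callable rather than a real LLM:

```python
def predict(model, template: str, **kwargs) -> str:
    # Format the prompt template with kwargs and pass the result to the
    # model, mirroring the predict() call pattern described above.
    return model(template.format(**kwargs))

def apply(model, template: str, input_list: list[dict]) -> list[str]:
    # Run predict over a batch of input dicts, mirroring apply().
    return [predict(model, template, **inputs) for inputs in input_list]

echo = lambda prompt: f"<{prompt}>"  # stand-in "LLM" that echoes its prompt
print(apply(echo, "Tell me a {adjective} joke", [{"adjective": "funny"}]))
# prints ['<Tell me a funny joke>']
```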

    Chain to run queries against LLMs.

    This class is deprecated. See below for an example implementation using LangChain runnables:

    from langchain_core.output_parsers import StrOutputParser
    from langchain_core.prompts import PromptTemplate
    from langchain_openai import OpenAI
    
    prompt_template = "Tell me a {adjective} joke"
    prompt = PromptTemplate(input_variables=["adjective"], template=prompt_template)
    model = OpenAI()
    chain = prompt | model | StrOutputParser()
    
    chain.invoke("your adjective here")
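The `|` operator in the snippet above composes runnables into a pipeline in which each component's output feeds the next. As a rough stdlib-only sketch of that composition pattern (a hypothetical `Step` class, not langchain's actual Runnable):

```python
class Step:
    # Minimal pipeline step: wraps a function and supports `|` composition.
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other: "Step") -> "Step":
        # Chain: run self first, then feed the result to `other`.
        return Step(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

# Stand-ins for prompt, model, and output parser.
prompt = Step(lambda adjective: f"Tell me a {adjective} joke")
model = Step(lambda text: f"(model output for: {text})")
parser = Step(lambda msg: msg.strip())

chain = prompt | model | parser
print(chain.invoke("funny"))  # prints (model output for: Tell me a funny joke)
```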

    Example:

    from langchain_classic.chains import LLMChain
    from langchain_openai import OpenAI
    from langchain_core.prompts import PromptTemplate
    
    prompt_template = "Tell me a {adjective} joke"
    prompt = PromptTemplate(input_variables=["adjective"], template=prompt_template)
    chain = LLMChain(llm=OpenAI(), prompt=prompt)
    Methods inherited from Runnable (continued): config_schema, get_config_jsonschema, get_graph, get_prompts, pipe, pick, assign, invoke, ainvoke, batch, batch_as_completed, abatch, abatch_as_completed, stream, astream, astream_log, astream_events, transform, atransform, bind, with_config, with_listeners, with_alisteners, with_types, with_retry, map, with_fallbacks, as_tool