langchain.js

Class LLMChain

    Chain to run queries against LLMs.

    Deprecated: this class will be removed in 1.0.0. Use the LangChain Expression Language (LCEL) instead. The example below shows how to replace LLMChain with LCEL:

    import { ChatPromptTemplate } from "@langchain/core/prompts";
    import { ChatOpenAI } from "@langchain/openai";

    const prompt = ChatPromptTemplate.fromTemplate("Tell me a {adjective} joke");
    const llm = new ChatOpenAI({ model: "gpt-4o-mini" });
    const chain = prompt.pipe(llm);

    const response = await chain.invoke({ adjective: "funny" });

    Properties

    lc_serializable: boolean = true
    llm: any

    LLM Wrapper to use

    llmKwargs?: any

    Kwargs to pass to LLM

    memory?: any
    outputKey: string = "text"

    Key to use for output; defaults to "text"

    outputParser?: any

    OutputParser to use

    prompt: BasePromptTemplate

    Prompt object to use

    Accessors

    • get inputKeys(): any

      Returns any

    • get lc_namespace(): string[]

      Returns string[]

    • get outputKeys(): string[]

      Returns string[]

    Methods

    • _chainType(): "llm"

      Return the string type key uniquely identifying this class of chain.

      Returns "llm"

    • Parameters

      • values: any

      Returns Promise<any>

    • Parameters

      • text: string

      Returns Promise<number>

    • apply(inputs: ChainValues[], config?: any[]): Promise<ChainValues[]>

      Call the chain on all inputs in the list.

      Deprecated: use .batch() instead. Will be removed in 0.2.0.

      Parameters

      • inputs: ChainValues[]
      • Optional config: any[]

      Returns Promise<ChainValues[]>

    • call(values: any, config?: any): Promise<ChainValues>

      Run the core logic of this chain and add to output if desired.

      Wraps _call and handles memory.

      Parameters

      • values: any
      • Optional config: any

      Returns Promise<ChainValues>

    • Parameters

      • examples: ChainValues
      • predictions: ChainValues
      • args: EvaluateArgs = ...

      Returns Promise<ChainValues>

    • invoke(input: ChainValues, options?: any): Promise<ChainValues>

      Invoke the chain with the provided input and return the output.

      Parameters

      • input: ChainValues

        Input values for the chain run.

      • Optional options: any

      Returns Promise<ChainValues>

      Promise that resolves with the output of the chain run.

    • predict(values: any, callbackManager?: any): Promise<string>

      Format prompt with values and pass to LLM.

      Parameters

      • values: any

        Keys to pass to the prompt template

      • Optional callbackManager: any

        CallbackManager to use

      Returns Promise<string>

      Completion from LLM.

      const text = await chain.predict({ adjective: "funny" });
      
    • Parameters

      • inputs: Record<string, unknown>
      • outputs: Record<string, unknown>
      • returnOnlyOutputs: boolean = false

      Returns Promise<Record<string, unknown>>

    • run(input: any, config?: any): Promise<string>

      Parameters

      • input: any
      • Optional config: any

      Returns Promise<string>

      Deprecated: use .invoke() instead. Will be removed in 0.2.0.

    • Parameters

      • llm: BaseLanguageModelInterface
      • options: { chainInput?: Omit<LLMChainInput<string, any>, "llm">; prompt?: any } = {}

      Returns QAEvalChain

    • Returns string