langchain.js

    Class MultiPromptChain

    A class that represents a multi-prompt chain in the LangChain framework. It extends the MultiRouteChain class and provides additional functionality specific to multi-prompt chains.

    import { ChatOpenAI } from "@langchain/openai";
    import { MultiPromptChain } from "langchain/chains";

    const multiPromptChain = MultiPromptChain.fromLLMAndPrompts(
      new ChatOpenAI({ model: "gpt-4o-mini" }),
      {
        promptNames: ["physics", "math", "history"],
        promptDescriptions: [
          "Good for answering questions about physics",
          "Good for answering math questions",
          "Good for answering questions about history",
        ],
        promptTemplates: [
          `You are a very smart physics professor. Here is a question:\n{input}\n`,
          `You are a very good mathematician. Here is a question:\n{input}\n`,
          `You are a very smart history professor. Here is a question:\n{input}\n`,
        ],
      }
    );
    const result = await multiPromptChain.invoke({
      input: "What is the speed of light?",
    });

    Hierarchy

    MultiRouteChain → MultiPromptChain

    Properties

    defaultChain: BaseChain
    destinationChains: { [name: string]: BaseChain<ChainValues, ChainValues> }
    memory?: any
    routerChain: RouterChain
    silentErrors: boolean = false
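
    The properties above cooperate roughly as follows: routerChain picks a destination name, the matching entry in destinationChains handles the input, defaultChain catches anything the router cannot place, and silentErrors controls whether a failing destination throws. A minimal toy sketch of that control flow (hypothetical names, not the real LangChain classes; the real router is itself a chain and everything is async):

```typescript
// Toy sketch of MultiRouteChain routing (illustrative only).
type Vals = Record<string, string>;

interface Chain {
  run(values: Vals): Vals;
}

class ToyMultiRouteChain {
  constructor(
    private destinationChains: { [name: string]: Chain },
    private defaultChain: Chain,
    private route: (values: Vals) => string | null, // stands in for routerChain
    private silentErrors: boolean = false,
  ) {}

  call(values: Vals): Vals {
    const name = this.route(values);
    // Fall back to defaultChain when the router names no known destination.
    const chain = (name && this.destinationChains[name]) || this.defaultChain;
    try {
      return chain.run(values);
    } catch (err) {
      // silentErrors swallows destination failures instead of rethrowing.
      if (this.silentErrors) return { text: "" };
      throw err;
    }
  }
}
```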

    Accessors

    • get inputKeys(): string[]

      Returns string[]

    • get lc_namespace(): string[]

      Returns string[]

    • get outputKeys(): string[]

      Returns string[]

    Methods

    • _call — Run the core logic of this chain and return the output.

      Parameters

      • values: ChainValues
      • Optional runManager: any

      Returns Promise<ChainValues>

    • _chainType — Return the string type key uniquely identifying this class of chain.

      Returns string

    • Parameters

      • values: any

      Returns Promise<any>

    • apply — Call the chain on all inputs in the list.

      Parameters

      • inputs: ChainValues[]
      • Optional config: any[]

      Returns Promise<ChainValues[]>

      Deprecated: use .batch() instead. Will be removed in 0.2.0.

    • call — Run the core logic of this chain and add to output if desired. Wraps _call and handles memory.

      Parameters

      • values: any
      • Optional config: any
      • Optional tags: string[]

      Returns Promise<ChainValues>

      Deprecated: use .invoke() instead. Will be removed in 0.2.0.
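
      The "wraps _call and handles memory" split described above can be sketched in miniature: the public entry point merges remembered variables into the inputs, delegates to the core _call, then records the exchange for the next turn. The memory shape below is hypothetical and far simpler than LangChain's memory classes:

```typescript
// Toy sketch of the call/_call split (names and shapes illustrative only).
class ToyChain {
  // Public here only so the sketch is easy to inspect.
  memory = new Map<string, string>();

  // Core logic; real subclasses override _call.
  protected _call(values: Record<string, string>): Record<string, string> {
    return { output: `echo: ${values.input}` };
  }

  call(values: Record<string, string>): Record<string, string> {
    // 1. Merge memory variables into the inputs.
    const full = { ...Object.fromEntries(this.memory), ...values };
    // 2. Run the core logic.
    const outputs = this._call(full);
    // 3. Save context so later calls see the previous input.
    this.memory.set("lastInput", values.input);
    return outputs;
  }
}
```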

    • invoke — Invoke the chain with the provided input and return the output.

      Parameters

      • input: ChainValues

        Input values for the chain run.

      • Optional options: any

      Returns Promise<ChainValues>

      Promise that resolves with the output of the chain run.

    • prepOutputs

      Parameters

      • inputs: Record<string, unknown>
      • outputs: Record<string, unknown>
      • returnOnlyOutputs: boolean = false

      Returns Promise<Record<string, unknown>>

    • run

      Parameters

      • input: any
      • Optional config: any

      Returns Promise<string>

      Deprecated: use .invoke() instead. Will be removed in 0.2.0.

    • Static fromLLMAndPrompts — A static method that creates an instance of MultiPromptChain from a BaseLanguageModel and a set of prompts. It takes optional parameters for the default chain and additional options.

      Parameters

      • llm: BaseLanguageModelInterface

        A BaseLanguageModel instance.

      • __namedParameters: {
            conversationChainOpts?: Omit<
                LLMChainInput<string, any>,
                "llm" | "outputKey",
            >;
            defaultChain?: BaseChain<ChainValues, ChainValues>;
            llmChainOpts?: Omit<LLMChainInput<string, any>, "llm" | "prompt">;
            multiRouteChainOpts?: Omit<MultiRouteChainInput, "defaultChain">;
            promptDescriptions: string[];
            promptNames: string[];
            promptTemplates: string[] | PromptTemplate[];
        }

      Returns MultiPromptChain

      An instance of MultiPromptChain.
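
      Conceptually, fromLLMAndPrompts zips its three parallel arrays: each name/description/template triple becomes one destination, and the name-description pairs inform the router's choice. A sketch of that bookkeeping only, assuming a hypothetical zipPrompts helper (the real method builds LLM chains and a RouterChain, not strings):

```typescript
// Hypothetical helper sketching how fromLLMAndPrompts pairs up its
// parallel arrays. The real method builds chains, not strings.
function zipPrompts(
  promptNames: string[],
  promptDescriptions: string[],
  promptTemplates: string[],
) {
  if (
    promptNames.length !== promptDescriptions.length ||
    promptNames.length !== promptTemplates.length
  ) {
    throw new Error(
      "promptNames, promptDescriptions and promptTemplates must have the same length",
    );
  }
  // One destination per name, keyed like destinationChains.
  const destinations: { [name: string]: string } = {};
  promptNames.forEach((name, i) => {
    destinations[name] = promptTemplates[i];
  });
  // The router chooses among "name: description" lines.
  const routerDescriptions = promptNames
    .map((name, i) => `${name}: ${promptDescriptions[i]}`)
    .join("\n");
  return { destinations, routerDescriptions };
}
```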

    • Static fromPrompts

      Parameters

      • llm: BaseLanguageModelInterface
      • promptNames: string[]
      • promptDescriptions: string[]
      • promptTemplates: string[] | PromptTemplate[]
      • Optional defaultChain: BaseChain<ChainValues, ChainValues>
      • Optional options: Omit<MultiRouteChainInput, "defaultChain">

      Returns MultiPromptChain

      Deprecated: use fromLLMAndPrompts instead.

    • Returns string