langchain.js

    TaskPrioritizationChain

    Chain to prioritize tasks.



    Constructors

    Properties

    lc_serializable: boolean = true
    llm: any

    LLM Wrapper to use

    llmKwargs?: any

    Kwargs to pass to LLM

    memory?: any
    outputKey: string = "text"

    Key to use for output. Defaults to "text".

    outputParser?: any

    OutputParser to use

    prompt: BasePromptTemplate

    Prompt object to use
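    How prompt, llm, and outputKey cooperate can be sketched with plain TypeScript stand-ins. FakePromptTemplate, FakeLLM, and SketchLLMChain below are illustrative assumptions, not the LangChain.js implementations:

```typescript
// Illustrative stand-ins; the real classes ship with LangChain.js.
type ChainValues = Record<string, unknown>;

class FakePromptTemplate {
  constructor(
    private template: string,
    private inputVariables: string[],
  ) {}

  // Substitute each {variable} in the template with its value.
  format(values: ChainValues): string {
    return this.inputVariables.reduce(
      (acc, v) => acc.replace(`{${v}}`, String(values[v])),
      this.template,
    );
  }
}

class FakeLLM {
  // A real LLM wrapper would call a model; this one just echoes.
  async invoke(prompt: string): Promise<string> {
    return `completion for: ${prompt}`;
  }
}

class SketchLLMChain {
  outputKey = "text"; // mirrors the documented default

  constructor(
    private prompt: FakePromptTemplate,
    private llm: FakeLLM,
  ) {}

  // Format the prompt, run the LLM, key the completion by outputKey.
  async call(values: ChainValues): Promise<ChainValues> {
    const completion = await this.llm.invoke(this.prompt.format(values));
    return { [this.outputKey]: completion };
  }
}
```

    Calling the sketch with { adjective: "funny" } resolves to an object whose single key is the chain's outputKey.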

    Accessors

    • get inputKeys(): any

      Returns any

    • get lc_namespace(): string[]

      Returns string[]

    • get outputKeys(): string[]

      Returns string[]

    Methods

    • Returns the string type key uniquely identifying this class of chain.

      Returns "llm"

    • Parameters

      • values: any

      Returns Promise<any>

    • Parameters

      • text: string

      Returns Promise<number>

    • Parameters

      • inputs: ChainValues[]
      • Optional config: any[]

      Returns Promise<ChainValues[]>

      Call the chain on all inputs in the list.

      Deprecated: use .batch() instead. Will be removed in 0.2.0.
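    Batch invocation amounts to mapping the single-input call over the list; a minimal sketch, where callOne is a hypothetical stand-in for the chain's per-input call:

```typescript
type ChainValues = Record<string, unknown>;

// Hypothetical per-input call; a real chain would format a prompt
// and query the LLM here.
async function callOne(values: ChainValues): Promise<ChainValues> {
  return { text: `output for ${JSON.stringify(values)}` };
}

// Run the chain over every input in the list, concurrently.
async function batchCall(inputs: ChainValues[]): Promise<ChainValues[]> {
  return Promise.all(inputs.map(callOne));
}
```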

    • Run the core logic of this chain and add to output if desired.

      Wraps _call and handles memory.

      Parameters

      • values: any
      • Optional config: any

      Returns Promise<ChainValues>
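    The wrapping described above (merge memory variables into the inputs, run the core _call, persist context back to memory) can be sketched as follows; the memory interface here is a simplified assumption:

```typescript
type ChainValues = Record<string, unknown>;

// Simplified, assumed memory interface.
interface SketchMemory {
  loadMemoryVariables(): Promise<ChainValues>;
  saveContext(inputs: ChainValues, outputs: ChainValues): Promise<void>;
}

class SketchChain {
  memory?: SketchMemory;

  // Core logic a concrete chain would override.
  protected async _call(values: ChainValues): Promise<ChainValues> {
    return { text: `echo:${String(values["input"])}` };
  }

  // call wraps _call: load memory variables, run, then save context.
  async call(values: ChainValues): Promise<ChainValues> {
    const memoryVars = this.memory
      ? await this.memory.loadMemoryVariables()
      : {};
    const outputs = await this._call({ ...memoryVars, ...values });
    if (this.memory) {
      await this.memory.saveContext(values, outputs);
    }
    return outputs;
  }
}
```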

    • Invokes the chain with the provided input and returns the output.

      Parameters

      • input: ChainValues

        Input values for the chain run.

      • Optional options: any

      Returns Promise<ChainValues>

      Promise that resolves with the output of the chain run.

    • Formats the prompt with the given values and passes it to the LLM.

      Parameters

      • values: any

        keys to pass to prompt template

      • Optional callbackManager: any

        CallbackManager to use

      Returns Promise<string>

      Completion from LLM.

      llm.predict({ adjective: "funny" })
      
    • Parameters

      • inputs: Record<string, unknown>
      • outputs: Record<string, unknown>
      • returnOnlyOutputs: boolean = false

      Returns Promise<Record<string, unknown>>
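    Given the signature, returnOnlyOutputs plausibly controls whether the inputs are merged into the returned record; a sketch of that assumed behavior:

```typescript
// Assumed behavior: merge inputs and outputs unless the caller asks
// for outputs only.
function prepOutputs(
  inputs: Record<string, unknown>,
  outputs: Record<string, unknown>,
  returnOnlyOutputs = false,
): Record<string, unknown> {
  return returnOnlyOutputs ? { ...outputs } : { ...inputs, ...outputs };
}
```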

    • Parameters

      • input: any
      • Optional config: any

      Returns Promise<string>

      Deprecated: use .invoke() instead. Will be removed in 0.2.0.

    • Static method to create a new TaskPrioritizationChain from a BaseLanguageModel. It generates a prompt using the PromptTemplate class and the task prioritization template, and returns a new instance of TaskPrioritizationChain.

      Parameters

      • fields: Omit<LLMChainInput, "prompt">

        Object with fields used to initialize the chain, excluding the prompt.

      Returns LLMChain

      A new instance of TaskPrioritizationChain.
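    The factory described above can be sketched in isolation. The method name fromLLM and the template text are assumptions (based on how related LangChain.js chains expose their factories); the sketch only shows the pattern of building the prompt internally so callers supply just the model:

```typescript
// Minimal LLM-like interface so the sketch stays self-contained.
interface LLMLike {
  invoke(prompt: string): Promise<string>;
}

// Placeholder template text; the real task prioritization template
// ships with LangChain.js.
const TASK_PRIORITIZATION_TEMPLATE =
  "You are a task prioritization AI. Prioritize these tasks: {task_names}";

class SketchTaskPrioritizationChain {
  private constructor(
    readonly template: string,
    readonly llm: LLMLike,
  ) {}

  // fromLLM-style factory: generate the prompt from the template,
  // then return a new chain instance.
  static fromLLM(llm: LLMLike): SketchTaskPrioritizationChain {
    return new SketchTaskPrioritizationChain(TASK_PRIORITIZATION_TEMPLATE, llm);
  }
}
```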

    • Returns string