@langchain/core / utils / testing / FakeListChatModel
Class · Since v1.0

FakeListChatModel

A fake chat model that returns responses from a predefined list, useful for testing.

class FakeListChatModel

Bases

BaseChatModel<FakeListChatModelCallOptions>

Used in Docs

  • Fake integration

Example

import { FakeListChatModel } from "@langchain/core/utils/testing";
import { HumanMessage } from "@langchain/core/messages";

const chat = new FakeListChatModel({
  responses: ["I'll callback later.", "You 'console' them!"],
});

const firstMessage = new HumanMessage("You want to hear a JavaScript joke?");
const secondMessage = new HumanMessage("How do you cheer up a JavaScript developer?");

// Invoke the chat model with a message and log the response
const firstResponse = await chat.invoke([firstMessage]);
console.log({ firstResponse });

const secondResponse = await chat.invoke([secondMessage]);
console.log({ secondResponse });

Constructors

constructor
constructor

Properties

property
cache: BaseCache<Generation[]>
property
callbacks: Callbacks

Callbacks for this call and any sub-calls (e.g. a Chain calling an LLM). Tags are passed to all callbacks; metadata is passed to handle*Start callbacks.

property
caller: AsyncCaller

The async caller should be used by subclasses to make any async calls, which will thus benefit from the concurrency and retry logic.

property
disableStreaming: boolean
property
emitCustomEvent: boolean
property
generationInfo: Record<string, unknown>

Raw generation info response from the provider. May include things like the reason for finishing (e.g., in OpenAI).

property
i: number
property
lc_kwargs: SerializedFields
property
lc_namespace: string[]

A path to the module that contains the class, e.g. ["langchain", "llms"]. Usually this should be the same as the entrypoint the class is exported from.

property
lc_runnable: boolean
property
lc_serializable: boolean
property
metadata: Record<string, unknown>
property
name: string
property
outputVersion: MessageOutputVersion
property
ParsedCallOptions: Omit<CallOptions, Exclude<keyof RunnableConfig, "signal" | "timeout" | "maxConcurrency">>
property
responses: string[]
property
sleep: number
property
tags: string[]
property
toolStyle: "openai" | "anthropic" | "bedrock" | "google"
property
verbose: boolean
property
callKeys: string[]
property
lc_aliases: __type | undefined
property
lc_attributes: __type | undefined
property
lc_id: string[]
property
lc_secrets: __type | undefined
property
lc_serializable_keys: string[] | undefined
property
profile: ModelProfile
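
The `responses` array and the `i` index drive the model's behavior: each call returns the current response and then advances the index. A minimal, self-contained sketch of that cycling (an illustration assuming wrap-around after the last response; not the actual implementation):

```typescript
// Illustrative sketch only: cycle through a fixed list of responses,
// wrapping around after the last one (mirrors the roles of `responses`,
// `i`, `_currentResponse`, and `_incrementResponse` conceptually).
function makeResponder(responses: string[]): () => string {
  let i = 0;
  return () => {
    const response = responses[i];
    i = (i + 1) % responses.length; // wrap back to the first response
    return response;
  };
}

const next = makeResponder(["first", "second"]);
console.log(next()); // "first"
console.log(next()); // "second"
console.log(next()); // "first" again
```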

Methods

method
_batchWithConfig→ Promise<Error | RunOutput[]>

Internal method that handles batching and configuration for a runnable. It takes a function, input values, and optional configuration, and returns a promise that resolves to the output values.

method
_callWithConfig
method
_combineLLMOutput→ Record<string, any> | undefined
method
_createResponseChunk→ ChatGenerationChunk
method
_currentResponse→ string
method
_formatGeneration→ __type
method
_generate→ Promise<ChatResult>
method
_generateCached→ Promise<LLMResult & __type>
method
_getOptionsList
method
_getSerializedCacheKeyParametersForCall→ string

Create a unique cache key for a specific call to a specific language model.

method
_identifyingParams→ Record<string, any>

Get the identifying parameters of the LLM.

method
_incrementResponse
method
_llmType→ string
method
_modelType→ string
method
_separateRunnableConfigFromCallOptions
method
_separateRunnableConfigFromCallOptionsCompat
method
_sleep→ Promise<void>
method
_sleepIfRequested→ Promise<void>
method
_streamIterator→ AsyncGenerator<RunOutput>

Default streaming implementation. Subclasses should override this method if they support streaming output.
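
The default behavior described here amounts to yielding the entire result of a single invocation as one chunk. A toy, self-contained sketch of that idea (an illustration, not the real implementation):

```typescript
// Illustrative sketch: a default "stream" that yields the full result of
// one invoke call as a single chunk, as non-streaming subclasses do.
async function* streamIteratorSketch<I, O>(
  invoke: (input: I) => Promise<O>,
  input: I,
): AsyncGenerator<O> {
  yield await invoke(input);
}

(async () => {
  for await (const chunk of streamIteratorSketch(async (s: string) => s + "!", "hi")) {
    console.log(chunk); // "hi!"
  }
})();
```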

method
_streamLog
method
_streamResponseChunks→ AsyncGenerator<ChatGenerationChunk>
method
_transformStreamWithConfig
method
assign→ Runnable

Assigns new fields to the dict output of this runnable. Returns a new runnable.

method
asTool→ RunnableToolLike<InteropZodType<ToolCall<string, Record<string, any>> | T>, RunOutput>

Convert a runnable to a tool. Return a new instance of RunnableToolLike which contains the runnable, name, description and schema.

method
batch→ Promise<RunOutput[]>

Default implementation of batch, which calls invoke N times. Subclasses should override this method if they can batch more efficiently.
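
Conceptually, the default `batch` fans a list of inputs out to `invoke` and collects the results in order. A simplified, self-contained sketch (illustration only; the real method also threads per-call config and honors `maxConcurrency`):

```typescript
// Illustrative sketch: run `invoke` once per input, in parallel;
// Promise.all preserves the input order in the results.
async function batchSketch<I, O>(
  invoke: (input: I) => Promise<O>,
  inputs: I[],
): Promise<O[]> {
  return Promise.all(inputs.map((input) => invoke(input)));
}

// Example: a toy "model" that echoes its input uppercased.
batchSketch(async (s: string) => s.toUpperCase(), ["a", "b"]).then((results) => {
  console.log(results); // ["A", "B"]
});
```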

method
bindTools→ Runnable<BaseLanguageModelInput, OutputMessageType, CallOptions>

Bind tool-like objects to this chat model.

method
generate→ Promise<LLMResult>

Generates chat based on the input messages.

method
generatePrompt→ Promise<LLMResult>
method
getGraph→ Graph
method
getLsParams→ LangSmithParams
method
getName→ string
method
getNumTokens→ Promise<number>

Get the number of tokens in the content.

method
invocationParams→ any

Get the parameters used to invoke the model.

method
invoke→ Promise<RunOutput>

Invoke the chat model with the given input and return its response.

method
pick→ Runnable

Pick keys from the dict output of this runnable. Returns a new runnable.

method
pipe→ Runnable<RunInput, Exclude<NewRunOutput, Error>>

Create a new runnable sequence that runs each individual runnable in series, piping the output of one runnable into another runnable or runnable-like.

method
stream→ Promise<IterableReadableStream<RunOutput>>

Stream output in chunks.

method
streamEvents→ IterableReadableStream<StreamEvent>

Generate a stream of events emitted by the internal steps of the runnable.

Use to create an iterator over StreamEvents that provide real-time information about the progress of the runnable, including StreamEvents from intermediate results.

A StreamEvent is a dictionary with the following schema:

  • event: string - Event names are of the format: on_[runnable_type]_(start|stream|end).
  • name: string - The name of the runnable that generated the event.
  • run_id: string - Randomly generated ID associated with the given execution of the runnable that emitted the event. A child runnable that gets invoked as part of the execution of a parent runnable is assigned its own unique ID.
  • tags: string[] - The tags of the runnable that generated the event.
  • metadata: Record<string, any> - The metadata of the runnable that generated the event.
  • data: Record<string, any>
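
The schema above can be written out as a type. This is a sketch based on the fields listed here, not the library's exact exported declaration:

```typescript
// Sketch of the StreamEvent shape described above (field names taken from
// the list; not the exact type exported by @langchain/core).
type StreamEventSketch = {
  event: string; // e.g. "on_chat_model_stream"
  name: string;
  run_id: string;
  tags: string[];
  metadata: Record<string, any>;
  data: Record<string, any>;
};

const example: StreamEventSketch = {
  event: "on_chat_model_stream",
  name: "FakeListChatModel",
  run_id: "00000000-0000-0000-0000-000000000000",
  tags: [],
  metadata: {},
  data: { chunk: "hello" },
};
console.log(example.event);
```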

Below is a table that illustrates some events that might be emitted by various chains. Metadata fields have been omitted from the table for brevity. Chain definitions have been included after the table.

ATTENTION: This reference table is for the v2 version of the schema.

+----------------------+-----------------------------+------------------------------------------+
| event                | input                       | output/chunk                             |
+======================+=============================+==========================================+
| on_chat_model_start  | {"messages": BaseMessage[]} |                                          |
+----------------------+-----------------------------+------------------------------------------+
| on_chat_model_stream |                             | AIMessageChunk("hello")                  |
+----------------------+-----------------------------+------------------------------------------+
| on_chat_model_end    | {"messages": BaseMessage[]} | AIMessageChunk("hello world")            |
+----------------------+-----------------------------+------------------------------------------+
| on_llm_start         | {'input': 'hello'}          |                                          |
+----------------------+-----------------------------+------------------------------------------+
| on_llm_stream        |                             | 'Hello'                                  |
+----------------------+-----------------------------+------------------------------------------+
| on_llm_end           | 'Hello human!'              |                                          |
+----------------------+-----------------------------+------------------------------------------+
| on_chain_start       |                             |                                          |
+----------------------+-----------------------------+------------------------------------------+
| on_chain_stream      |                             | "hello world!"                           |
+----------------------+-----------------------------+------------------------------------------+
| on_chain_end         | [Document(...)]             | "hello world!, goodbye world!"           |
+----------------------+-----------------------------+------------------------------------------+
| on_tool_start        | {"x": 1, "y": "2"}          |                                          |
+----------------------+-----------------------------+------------------------------------------+
| on_tool_end          |                             | {"x": 1, "y": "2"}                       |
+----------------------+-----------------------------+------------------------------------------+
| on_retriever_start   | {"query": "hello"}          |                                          |
+----------------------+-----------------------------+------------------------------------------+
| on_retriever_end     | {"query": "hello"}          | [Document(...), ..]                      |
+----------------------+-----------------------------+------------------------------------------+
| on_prompt_start      | {"question": "hello"}       |                                          |
+----------------------+-----------------------------+------------------------------------------+
| on_prompt_end        | {"question": "hello"}       | ChatPromptValue(messages: BaseMessage[]) |
+----------------------+-----------------------------+------------------------------------------+

The "on_chain_*" events are the default for Runnables that don't fit one of the above categories.

In addition to the standard events above, users can also dispatch custom events.

Custom events will only be surfaced in the v2 version of the API.

A custom event has the following format:

+-----------+------+------------------------------------------------------------+
| Attribute | Type | Description                                                |
+===========+======+============================================================+
| name      | str  | A user defined name for the event.                         |
+-----------+------+------------------------------------------------------------+
| data      | Any  | The data associated with the event. This can be anything.  |
+-----------+------+------------------------------------------------------------+

Here's an example:

import { RunnableLambda } from "@langchain/core/runnables";
import { dispatchCustomEvent } from "@langchain/core/callbacks/dispatch";
// Use this import for web environments that don't support "async_hooks"
// and manually pass config to child runs.
// import { dispatchCustomEvent } from "@langchain/core/callbacks/dispatch/web";

const slowThing = RunnableLambda.from(async (someInput: string) => {
  // Placeholder for some slow operation
  await new Promise((resolve) => setTimeout(resolve, 100));
  await dispatchCustomEvent("progress_event", {
    message: "Finished step 1 of 2",
  });
  await new Promise((resolve) => setTimeout(resolve, 100));
  return "Done";
});

const eventStream = await slowThing.streamEvents("hello world", {
  version: "v2",
});

for await (const event of eventStream) {
  if (event.event === "on_custom_event") {
    console.log(event);
  }
}
method
streamLog→ AsyncGenerator<RunLogPatch>

Stream all output from a runnable, as reported to the callback system. This includes all inner runs of LLMs, Retrievers, Tools, etc. Output is streamed as Log objects, which include a list of jsonpatch ops that describe how the state of the run has changed in each step, and the final state of the run. The jsonpatch ops can be applied in order to construct state.
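
The jsonpatch ops described above can be folded together, in order, to reconstruct run state. A self-contained sketch of that idea, supporting only the "add" op with simple paths (an illustration; real streamLog consumers use a full JSON Patch implementation):

```typescript
// Illustrative sketch: apply a sequence of JSON Patch "add" ops in order
// to build up an object, the way streamLog output is folded into run state.
type AddOp = { op: "add"; path: string; value: unknown };

function applyOps(ops: AddOp[]): Record<string, any> {
  const state: Record<string, any> = {};
  for (const { path, value } of ops) {
    const keys = path.split("/").filter((k) => k.length > 0);
    let target = state;
    // Walk (creating as needed) down to the parent of the final key.
    for (const key of keys.slice(0, -1)) {
      if (typeof target[key] !== "object" || target[key] === null) {
        target[key] = {};
      }
      target = target[key];
    }
    target[keys[keys.length - 1]] = value;
  }
  return state;
}

const state = applyOps([
  { op: "add", path: "/logs", value: {} },
  { op: "add", path: "/logs/step1", value: { output: "hello" } },
]);
console.log(state); // { logs: { step1: { output: "hello" } } }
```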

method
toJSON→ Serialized
method
toJSONNotImplemented→ SerializedNotImplemented
method
transform→ AsyncGenerator<RunOutput>

Default implementation of transform, which buffers input and then calls stream. Subclasses should override this method if they can start producing output while input is still being generated.

method
withConfig→ Runnable<RunInput, RunOutput, RunnableConfig<Record<string, any>>>

Bind config to a Runnable, returning a new Runnable.

method
withFallbacks→ RunnableWithFallbacks<RunInput, RunOutput>

Create a new runnable from the current one that will try invoking other passed fallback runnables if the initial invocation fails.

method
withListeners→ Runnable<RunInput, RunOutput, RunnableConfig<Record<string, any>>>

Bind lifecycle listeners to a Runnable, returning a new Runnable. The Run object contains information about the run, including its id, type, input, output, error, startTime, endTime, and any tags or metadata added to the run.

method
withRetry→ RunnableRetry<RunInput, RunOutput, RunnableConfig<Record<string, any>>>

Add retry logic to an existing runnable.

method
withStructuredOutput→ Runnable<BaseLanguageModelInput, RunOutput>

Model wrapper that returns outputs formatted to match the given schema.

method
_convertInputToPromptValue
method
isRunnable→ thing is Runnable<any, any, RunnableConfig<Record<string, any>>>
method
lc_name→ string

The name of the serializable. Override to provide an alias or to preserve the serialized module name in minified environments.

Implemented as a static method to support loading logic.

method (deprecated)
serialize→ SerializedLLM
method (deprecated)
deserialize→ Promise<BaseLanguageModel<any, BaseLanguageModelCallOptions>>

Inheritance

FakeListChatModel inherits, in order, from BaseChatModel, BaseLanguageModel, BaseLangChain, Runnable, and Serializable. The inherited properties and methods are included in the listings above.

View source on GitHub