class FakeStreamingLLM
Internal method that handles batching and configuration for a runnable.
Create a unique cache key for a specific call to a specific language model.
Get the identifying parameters of the LLM.
Default streaming implementation.
Assigns new fields to the dict output of this runnable. Returns a new runnable.
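For example, a minimal sketch using the static RunnablePassthrough.assign variant; the field names are purely illustrative:

import { RunnablePassthrough } from "@langchain/core/runnables";

// Passes the input object through while merging in a computed field.
const chain = RunnablePassthrough.assign({
  doubled: (input: { value: number }) => input.value * 2,
});

await chain.invoke({ value: 5 });
// => { value: 5, doubled: 10 }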
Convert a runnable to a tool. Returns a new instance of RunnableToolLike.
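A sketch of what this might look like, assuming a zod schema describes the tool input; the runnable and tool names are illustrative:

import { z } from "zod";
import { RunnableLambda } from "@langchain/core/runnables";

const adder = RunnableLambda.from(
  async ({ a, b }: { a: number; b: number }) => a + b
);

// asTool() attaches a name, description, and input schema to the runnable.
const addTool = adder.asTool({
  name: "add",
  description: "Adds two numbers together.",
  schema: z.object({ a: z.number(), b: z.number() }),
});

await addTool.invoke({ a: 1, b: 2 }); // => 3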
Default implementation of batch, which calls invoke N times.
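For example (the doubling runnable is illustrative):

import { RunnableLambda } from "@langchain/core/runnables";

const double = RunnableLambda.from(async (x: number) => x * 2);

// batch() calls invoke() once per input, in parallel by default;
// maxConcurrency caps how many invocations run at once.
const results = await double.batch([1, 2, 3], undefined, {
  maxConcurrency: 2,
});
// => [2, 4, 6]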
Generates chat based on the input messages.
Get the number of tokens in the content.
Get the parameters used to invoke the model.
Method to invoke the document transformation. This method calls the transformDocuments method with the provided input.
Pick keys from the dict output of this runnable. Returns a new runnable.
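A small sketch; the dict-producing runnable is illustrative:

import { RunnableLambda } from "@langchain/core/runnables";

const stats = RunnableLambda.from((x: number) => ({
  value: x,
  squared: x * x,
}));

// pick() narrows the dict output to the listed keys.
const squaredOnly = stats.pick(["squared"]);

await squaredOnly.invoke(3);
// => { squared: 9 }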
Create a new runnable sequence that runs each individual runnable in series, piping the output of one runnable into another runnable or runnable-like.
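For example (both lambdas are illustrative):

import { RunnableLambda } from "@langchain/core/runnables";

const addOne = RunnableLambda.from((x: number) => x + 1);
const double = RunnableLambda.from((x: number) => x * 2);

// pipe() composes the two runnables into a RunnableSequence.
const chain = addOne.pipe(double);

await chain.invoke(3); // => 8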
Stream output in chunks.
Generate a stream of events emitted by the internal steps of the runnable.
Stream all output from a runnable, as reported to the callback system.
Default implementation of transform, which buffers input and then calls stream.
Bind config to a Runnable, returning a new Runnable.
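For example (the greeter runnable and its tags are illustrative):

import { RunnableLambda } from "@langchain/core/runnables";

const greet = RunnableLambda.from((name: string) => `Hello, ${name}!`);

// withConfig() returns a new runnable with the config pre-bound, so every
// invocation carries these tags and this run name.
const configured = greet.withConfig({
  runName: "greeter",
  tags: ["demo"],
});

await configured.invoke("world");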
Create a new runnable from the current one that will try invoking the current runnable, and, if that fails, invoke each provided fallback in order.
Bind lifecycle listeners to a Runnable, returning a new Runnable.
Add retry logic to an existing runnable.
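A sketch combining withRetry and withFallbacks; the flaky runnable and its failure counter are contrived for illustration:

import { RunnableLambda } from "@langchain/core/runnables";

let attempts = 0;
const flaky = RunnableLambda.from(async (_input: string) => {
  attempts += 1;
  if (attempts < 3) throw new Error("transient failure");
  return "recovered";
});

// withRetry() retries with exponential backoff up to the given attempt count;
// withFallbacks() switches to another runnable if all retries fail.
const resilient = flaky
  .withRetry({ stopAfterAttempt: 3 })
  .withFallbacks({
    fallbacks: [RunnableLambda.from(async () => "fallback answer")],
  });

await resilient.invoke("hi"); // => "recovered" on the third attempt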
Model wrapper that returns outputs formatted to match the given schema.
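A sketch of the usage pattern, assuming a chat model that supports structured output; ChatOpenAI and the "gpt-4o-mini" model name are illustrative stand-ins, not requirements:

import { z } from "zod";
// Any chat model that supports withStructuredOutput will do;
// ChatOpenAI is used here purely for illustration.
import { ChatOpenAI } from "@langchain/openai";

const joke = z.object({
  setup: z.string().describe("The setup of the joke"),
  punchline: z.string().describe("The punchline of the joke"),
});

const model = new ChatOpenAI({ model: "gpt-4o-mini" });
const structuredModel = model.withStructuredOutput(joke);

// The output is parsed into an object matching the schema.
const result = await structuredModel.invoke("Tell me a joke about cats");
// e.g. { setup: "...", punchline: "..." }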
The name of the serializable. Override to provide an alias or to preserve the serialized module name in minified environments.
Callbacks for this call and any sub-calls (e.g. a Chain calling an LLM). Tags are passed to all callbacks, metadata is passed to handle*Start callbacks.
The async caller should be used by subclasses to make any async calls, which will thus benefit from the concurrency and retry logic.
A path to the module that contains the class, e.g. ["langchain", "llms"]. Usually should be the same as the entrypoint the class is exported from.
LLM class that provides a simpler interface to subclass than BaseLLM.
Requires only implementing a simpler _call method instead of _generate.
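A minimal sketch of such a subclass; EchoLLM is a made-up example:

import { LLM, type BaseLLMParams } from "@langchain/core/language_models/llms";

// A toy LLM that just echoes the prompt back.
class EchoLLM extends LLM {
  constructor(fields?: BaseLLMParams) {
    super(fields ?? {});
  }

  _llmType(): string {
    return "echo";
  }

  // Only _call needs to be implemented; the base class derives
  // _generate (and thus invoke/batch/stream) from it.
  async _call(prompt: string): Promise<string> {
    return `You said: ${prompt}`;
  }
}

const llm = new EchoLLM();
await llm.invoke("hello"); // => "You said: hello"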
Get the identifying parameters of the LLM.
Default streaming implementation.
Generate a stream of events emitted by the internal steps of the runnable.
Use to create an iterator over StreamEvents that provide real-time information about the progress of the runnable, including StreamEvents from intermediate results.
A StreamEvent is a dictionary with the following schema:
- event: string - Event names are of the format: on_[runnable_type]_(start|stream|end).
- name: string - The name of the runnable that generated the event.
- run_id: string - Randomly generated ID associated with the given execution of the runnable that emitted the event. A child runnable that gets invoked as part of the execution of a parent runnable is assigned its own unique ID.
- tags: string[] - The tags of the runnable that generated the event.
- metadata: Record<string, any> - The metadata of the runnable that generated the event.
- data: Record<string, any>
Below is a table that illustrates some events that might be emitted by various chains. Metadata fields have been omitted from the table for brevity.
ATTENTION: This reference table is for the V2 version of the schema.
+----------------------+-----------------------------+------------------------------------------+
| event | input | output/chunk |
+======================+=============================+==========================================+
| on_chat_model_start | {"messages": BaseMessage[]} | |
+----------------------+-----------------------------+------------------------------------------+
| on_chat_model_stream | | AIMessageChunk("hello") |
+----------------------+-----------------------------+------------------------------------------+
| on_chat_model_end | {"messages": BaseMessage[]} | AIMessageChunk("hello world") |
+----------------------+-----------------------------+------------------------------------------+
| on_llm_start | {'input': 'hello'} | |
+----------------------+-----------------------------+------------------------------------------+
| on_llm_stream | | 'Hello' |
+----------------------+-----------------------------+------------------------------------------+
| on_llm_end | 'Hello human!' | |
+----------------------+-----------------------------+------------------------------------------+
| on_chain_start | | |
+----------------------+-----------------------------+------------------------------------------+
| on_chain_stream | | "hello world!" |
+----------------------+-----------------------------+------------------------------------------+
| on_chain_end | [Document(...)] | "hello world!, goodbye world!" |
+----------------------+-----------------------------+------------------------------------------+
| on_tool_start | {"x": 1, "y": "2"} | |
+----------------------+-----------------------------+------------------------------------------+
| on_tool_end | | {"x": 1, "y": "2"} |
+----------------------+-----------------------------+------------------------------------------+
| on_retriever_start | {"query": "hello"} | |
+----------------------+-----------------------------+------------------------------------------+
| on_retriever_end | {"query": "hello"} | [Document(...), ..] |
+----------------------+-----------------------------+------------------------------------------+
| on_prompt_start | {"question": "hello"} | |
+----------------------+-----------------------------+------------------------------------------+
| on_prompt_end | {"question": "hello"} | ChatPromptValue(messages: BaseMessage[]) |
+----------------------+-----------------------------+------------------------------------------+
The "on_chain_*" events are the default for Runnables that don't fit one of the above categories.
In addition to the standard events above, users can also dispatch custom events.
Custom events will only be surfaced in the v2 version of the API!
A custom event has the following format:
+-----------+------+------------------------------------------------------------+
| Attribute | Type | Description |
+===========+======+============================================================+
| name | str | A user defined name for the event. |
+-----------+------+------------------------------------------------------------+
| data | Any | The data associated with the event. This can be anything. |
+-----------+------+------------------------------------------------------------+
Here's an example:
import { RunnableLambda } from "@langchain/core/runnables";
import { dispatchCustomEvent } from "@langchain/core/callbacks/dispatch";
// Use this import for web environments that don't support "async_hooks"
// and manually pass config to child runs.
// import { dispatchCustomEvent } from "@langchain/core/callbacks/dispatch/web";
const slowThing = RunnableLambda.from(async (someInput: string) => {
// Placeholder for some slow operation
await new Promise((resolve) => setTimeout(resolve, 100));
await dispatchCustomEvent("progress_event", {
message: "Finished step 1 of 2",
});
await new Promise((resolve) => setTimeout(resolve, 100));
return "Done";
});
const eventStream = await slowThing.streamEvents("hello world", {
version: "v2",
});
for await (const event of eventStream) {
if (event.event === "on_custom_event") {
console.log(event);
}
}