Internal method of FakeBuiltModelInternal that handles batching and configuration for a runnable.
Create a unique cache key for a specific call to a specific language model.
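One way to build such a key is to combine the prompt with the model's invocation parameters, sorted so that parameter order does not affect the result. A minimal sketch, assuming a hypothetical `getCacheKey` helper (not the library's actual API):

```typescript
// Hypothetical sketch: a cache key from the prompt plus the model's
// invocation parameters, so identical calls map to the same entry.
function getCacheKey(prompt: string, params: Record<string, unknown>): string {
  // Sort keys so that parameter order does not change the key.
  const sortedParams = Object.keys(params)
    .sort()
    .map((k) => `${k}=${JSON.stringify(params[k])}`)
    .join(",");
  return `${prompt}::${sortedParams}`;
}
```

Two calls with the same prompt and the same parameters (in any order) produce identical keys.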
Get the identifying parameters of the LLM.
Default streaming implementation.
Assigns new fields to the dict output of this runnable. Returns a new runnable.
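The idea behind "assign" can be sketched in plain TypeScript: run the original step, compute new fields from its dict output, and merge them in. The `assign` helper below is illustrative, not the real API, and uses synchronous functions where real runnables are async:

```typescript
type Dict = Record<string, unknown>;

// Hypothetical sketch of "assign": merge new computed fields into the
// dict output of an existing step, leaving the original fields intact.
function assign(
  base: (input: Dict) => Dict,
  fields: Record<string, (output: Dict) => unknown>,
): (input: Dict) => Dict {
  return (input: Dict): Dict => {
    const output = base(input);
    const added: Dict = {};
    for (const [key, fn] of Object.entries(fields)) {
      added[key] = fn(output); // each new field is derived from the output
    }
    return { ...output, ...added };
  };
}
```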
Convert a runnable to a tool. Returns a new instance of RunnableToolLike.
Default implementation of batch, which calls invoke N times.
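That default can be sketched as a one-liner: with no native batching support, `batch` simply maps `invoke` over the inputs. This sync sketch is illustrative; the real method is async and adds concurrency and configuration handling:

```typescript
// Minimal sketch of the default batch behavior: call invoke once per
// input and collect the results in order.
function batch<I, O>(invoke: (input: I) => O, inputs: I[]): O[] {
  return inputs.map((input) => invoke(input));
}
```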
Bind tool-like objects to this chat model.
Generate a chat response based on the input messages.
Wraps getLsParams() and always appends ls_integration.
Get the number of tokens in the content.
Get the parameters used to invoke the model.
Method to invoke the document transformation.
Pick keys from the dict output of this runnable. Returns a new runnable.
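"Pick" is the mirror image of "assign": keep only the requested keys from the dict output and drop the rest. A hypothetical sketch (illustrative names, sync where the real API is async):

```typescript
// Hypothetical sketch of "pick": keep only the requested keys from a
// runnable's dict output; keys that are absent are simply skipped.
function pick<T extends Record<string, unknown>>(
  output: T,
  keys: string[],
): Record<string, unknown> {
  const result: Record<string, unknown> = {};
  for (const key of keys) {
    if (key in output) result[key] = output[key];
  }
  return result;
}
```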
Create a new runnable sequence that runs each individual runnable in series, piping the output of one runnable into the input of the next.
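The series composition can be sketched as a fold over the steps, each step consuming the previous step's output. The `sequence` helper is an illustrative stand-in for the real (async, type-transforming) sequence:

```typescript
// Sketch of a runnable sequence: run each step in order, feeding the
// output of one step into the next.
function sequence<T>(...steps: Array<(input: T) => T>): (input: T) => T {
  return (input: T) => steps.reduce((acc, step) => step(acc), input);
}
```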
Stream output in chunks.
Generate a stream of events emitted by the internal steps of the runnable.
Stream all output from a runnable, as reported to the callback system.
Default implementation of transform, which buffers input and then calls stream.
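The buffering behavior can be sketched with generators: consume the whole input stream first, then delegate to `stream` on the concatenated result. This sync-generator sketch is illustrative; the real implementation uses async iterables:

```typescript
// Sketch of the default transform: buffer the entire input stream,
// then call stream once on the full buffered input.
function* transform(
  inputChunks: Iterable<string>,
  stream: (input: string) => Iterable<string>,
): Generator<string> {
  let buffered = "";
  for (const chunk of inputChunks) buffered += chunk; // buffer everything first
  yield* stream(buffered); // then stream over the complete input
}
```

Note the trade-off: nothing is emitted until the input stream is fully consumed, which is why runnables that can stream natively override this default.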
Bind config to a Runnable, returning a new Runnable.
Create a new runnable from the current one that will try invoking the other passed fallback runnables if the initial invocation fails.
Bind lifecycle listeners to a Runnable, returning a new Runnable.
Add retry logic to an existing runnable.
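The core of that wrapper can be sketched as a loop that re-invokes on failure and rethrows the last error once the attempts are exhausted. `withRetry` here is an illustrative sync stand-in; the real version is async and supports backoff between attempts:

```typescript
// Sketch of retry logic around an existing callable: try up to
// maxAttempts times, rethrowing the last error if all attempts fail.
function withRetry<I, O>(
  fn: (input: I) => O,
  maxAttempts: number,
): (input: I) => O {
  return (input: I) => {
    let lastError: unknown;
    for (let attempt = 1; attempt <= maxAttempts; attempt++) {
      try {
        return fn(input);
      } catch (err) {
        lastError = err; // remember the failure and try again
      }
    }
    throw lastError;
  };
}
```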
Model wrapper that returns outputs formatted to match the given schema.
The name of the serializable. Override to provide an alias or to preserve the serialized module name in minified environments.
Callbacks for this call and any sub-calls (eg. a Chain calling an LLM). Tags are passed to all callbacks, metadata is passed to handle*Start callbacks.
The async caller should be used by subclasses to make any async calls, which will thus benefit from the concurrency and retry logic.
A path to the module that contains the class, eg. ["langchain", "llms"] Usually should be the same as the entrypoint the class is exported from.
A fake chat model for testing, created via fakeModel.
Queue responses with .respond() and .respondWithTools(), then
pass the instance directly wherever a chat model is expected.
Responses are consumed in first-in-first-out order, one per invoke() call.
When all queued responses are consumed, further invocations throw.
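The FIFO behavior described above can be sketched in plain TypeScript. The class and method names below are illustrative, not the actual test helper's API:

```typescript
// Sketch of a FIFO fake chat model: responses are queued up front,
// consumed one per invoke() call, and an empty queue throws.
class FakeChatModelSketch {
  private queue: string[] = [];

  respond(message: string): this {
    this.queue.push(message); // first queued, first returned
    return this; // allow chaining: model.respond("a").respond("b")
  }

  invoke(_input: string): string {
    const next = this.queue.shift();
    if (next === undefined) {
      throw new Error("No queued responses left");
    }
    return next;
  }
}
```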