A tool that can be created dynamically from a function, name, and description.
class DynamicTool&lt;ToolOutputT&gt;
Internal method that handles batching and configuration for a runnable
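The idea of a dynamic tool can be sketched without the library: wrap an arbitrary function together with the name and description the model uses to decide when to call it. The class and names below are illustrative only, not the library's implementation.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class SimpleDynamicTool:
    # The callable the tool wraps, plus the name/description the model sees.
    func: Callable[[str], Any]
    name: str
    description: str

    def invoke(self, tool_input: str) -> Any:
        # Delegate straight to the wrapped function.
        return self.func(tool_input)

word_count = SimpleDynamicTool(
    func=lambda text: len(text.split()),
    name="word_count",
    description="Counts the words in the input string.",
)
print(word_count.invoke("one two three"))  # → 3
```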
Default streaming implementation.
Assigns new fields to the dict output of this runnable. Returns a new runnable.
Convert a runnable to a tool. Return a new instance of RunnableToolLike
Default implementation of batch, which calls invoke N times.
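The default batch behavior described above can be sketched as follows; this is a minimal stand-in, not the library's code, and real implementations typically also thread configuration through each call.

```python
from concurrent.futures import ThreadPoolExecutor

class MiniRunnable:
    def invoke(self, value):
        raise NotImplementedError

    def batch(self, inputs):
        # Default batch: call invoke once per input, preserving order.
        # A thread pool lets the N invocations run concurrently.
        with ThreadPoolExecutor() as pool:
            return list(pool.map(self.invoke, inputs))

class Doubler(MiniRunnable):
    def invoke(self, value):
        return value * 2

print(Doubler().batch([1, 2, 3]))  # → [2, 4, 6]
```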
Method to invoke the document transformation.
Pick keys from the dict output of this runnable. Returns a new runnable.
Create a new runnable sequence that runs each individual runnable in series, piping the output of one runnable into the next.
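The series-piping behavior can be illustrated with a toy sequence class (hypothetical names, not the library's types): each step receives the previous step's output.

```python
class MiniSequence:
    def __init__(self, *steps):
        self.steps = steps

    def invoke(self, value):
        # Run each step in order, piping the output of one into the next.
        for step in self.steps:
            value = step(value)
        return value

seq = MiniSequence(lambda x: x + 1, lambda x: x * 10)
print(seq.invoke(2))  # → 30
```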
Stream output in chunks.
Generate a stream of events emitted by the internal steps of the runnable.
Stream all output from a runnable, as reported to the callback system.
Default implementation of transform, which buffers input and then calls stream.
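"Buffer input and then call stream" can be shown concretely; this generator-based sketch is illustrative, not the library's implementation.

```python
def default_transform(chunks, stream_fn):
    # Default transform: buffer all incoming chunks into one input,
    # then delegate to the streaming implementation.
    buffered = "".join(chunks)
    yield from stream_fn(buffered)

def shout_stream(text):
    # A toy stream implementation: yield each word upper-cased.
    for word in text.split():
        yield word.upper()

print(list(default_transform(iter(["hello ", "world"]), shout_stream)))
# → ['HELLO', 'WORLD']
```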
Bind config to a Runnable, returning a new Runnable.
Create a new runnable from the current one that will try invoking the passed fallback runnables if the initial invocation fails.
Bind lifecycle listeners to a Runnable, returning a new Runnable.
Add retry logic to an existing runnable.
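A minimal retry wrapper captures the idea (hypothetical helper, not the library's API — real retry support usually adds backoff and exception filtering):

```python
import time

def with_retry(fn, attempts=3, delay=0.0):
    # Wrap fn so transient failures are retried before giving up.
    def wrapped(value):
        for attempt in range(attempts):
            try:
                return fn(value)
            except Exception:
                if attempt == attempts - 1:
                    raise  # out of attempts: surface the last error
                time.sleep(delay)
    return wrapped

calls = {"n": 0}
def flaky(x):
    # Fails twice, then succeeds — simulating a transient error.
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient")
    return x * 2

print(with_retry(flaky)(5))  # → 10
```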
The name of the serializable. Override to provide an alias or to preserve the serialized module name in minified environments.
Callbacks for this call and any sub-calls (eg. a Chain calling an LLM).
Default config object for the tool runnable.
A description of what the function does, used by the model to choose when and how to call the function.
Optional provider-specific extra fields for the tool.
The tool response format.
Whether to return the tool's output directly.
A path to the module that contains the class, eg. ["langchain", "llms"]
Callbacks for this call and any sub-calls (eg. a Chain calling an LLM). Tags are passed to all callbacks, metadata is passed to handle*Start callbacks.
Default config object for the tool runnable.
A description of what the function does, used by the model to choose when and how to call the function.
The tool response format.
If "content" then the output of the tool is interpreted as the contents of a ToolMessage. If "content_and_artifact" then the output is expected to be a two-tuple corresponding to the (content, artifact) of a ToolMessage.
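The `"content_and_artifact"` shape can be sketched with a hypothetical tool function: the first tuple element becomes the message content the model sees, the second is the raw artifact kept out of the prompt.

```python
def search_tool(query: str):
    # Hypothetical tool written for response_format="content_and_artifact":
    # returns (content, artifact) — content for the ToolMessage, artifact
    # carrying the raw data for downstream code.
    raw_results = [{"url": "https://example.com", "score": 0.9}]
    content = f"Found {len(raw_results)} result(s) for {query!r}."
    return content, raw_results

content, artifact = search_tool("langchain")
print(content)   # → Found 1 result(s) for 'langchain'.
print(artifact)  # → [{'url': 'https://example.com', 'score': 0.9}]
```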
Whether to return the tool's output directly.
Setting this to true means that after the tool is called, an agent should stop looping.
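The "stop looping" effect of a direct-return tool can be illustrated with a toy agent loop (entirely hypothetical — not the library's agent executor):

```python
def agent_loop(tools, steps):
    # Run each planned (tool_name, argument) call in order, but stop
    # looping and return immediately when a tool is marked return_direct.
    result = None
    for name, arg in steps:
        func, return_direct = tools[name]
        result = func(arg)
        if return_direct:
            return result  # hand the tool output straight back to the caller
    return result

tools = {
    "lookup": (lambda q: f"answer to {q}", True),   # return_direct=True
    "echo": (lambda q: q, False),
}
# The second step never runs: "lookup" returns directly.
print(agent_loop(tools, [("lookup", "q1"), ("echo", "q2")]))  # → answer to q1
```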
A path to the module that contains the class, eg. ["langchain", "llms"] Usually should be the same as the entrypoint the class is exported from.