A tool that can be created dynamically from a function, name, and description.
class DynamicTool<ToolOutputT>
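The idea can be illustrated with a minimal self-contained sketch. This is not the actual langchain class; the names and shape here are hypothetical stand-ins, and the real class additionally handles schemas, callbacks, and config.

```typescript
// Hypothetical minimal shape of a tool created dynamically from a
// function, name, and description. Illustration only, not the real API.
type ToolFunc = (input: string) => Promise<string>;

class SimpleDynamicTool {
  constructor(
    public name: string,
    public description: string,
    private func: ToolFunc,
  ) {}

  // Invokes the wrapped function with the provided input.
  async invoke(input: string): Promise<string> {
    return this.func(input);
  }
}

// Usage: wrap an arbitrary async function as a named, described tool.
const echo = new SimpleDynamicTool(
  "echo",
  "Returns its input unchanged.",
  async (input) => input,
);
```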
Internal method that handles batching and configuration for a runnable. It takes a function, input values, and optional configuration, and returns a promise that resolves to the output values.
Default streaming implementation.
Assigns new fields to the dict output of this runnable. Returns a new runnable.
Convert a runnable to a tool. Returns a new instance of RunnableToolLike.
Default implementation of batch, which calls invoke N times.
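A default batch of this kind can be sketched as a standalone helper: call invoke once per input and collect the results. This is an illustrative function, not the library's actual method.

```typescript
// Sketch of a default batch: invoke is called N times, once per input,
// and the calls run concurrently via Promise.all.
async function defaultBatch<In, Out>(
  invoke: (input: In) => Promise<Out>,
  inputs: In[],
): Promise<Out[]> {
  return Promise.all(inputs.map((input) => invoke(input)));
}
```

A real implementation would additionally thread per-call configuration and bound concurrency; this shows only the "invoke N times" core.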
Invokes the tool with the provided input and configuration.
Pick keys from the dict output of this runnable. Returns a new runnable.
Create a new runnable sequence that runs each individual runnable in series, piping the output of one runnable into the next runnable or runnable-like.
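Piping in series can be sketched with a hypothetical minimal runnable interface (the real Runnable carries far more, e.g. config and streaming):

```typescript
// Minimal stand-in for a runnable: anything with an async invoke.
interface MiniRunnable<In, Out> {
  invoke(input: In): Promise<Out>;
}

// Compose two runnables in series: the output of the first is fed
// as the input of the second.
function pipeRunnables<A, B, C>(
  first: MiniRunnable<A, B>,
  second: MiniRunnable<B, C>,
): MiniRunnable<A, C> {
  return {
    async invoke(input: A): Promise<C> {
      return second.invoke(await first.invoke(input));
    },
  };
}

// Usage: double a number, then add one.
const double: MiniRunnable<number, number> = { invoke: async (n) => n * 2 };
const addOne: MiniRunnable<number, number> = { invoke: async (n) => n + 1 };
const seq = pipeRunnables(double, addOne);
```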
Stream output in chunks.
Generate a stream of events emitted by the internal steps of the runnable.
Stream all output from a runnable, as reported to the callback system.
Default implementation of transform, which buffers input and then calls stream.
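The buffer-then-stream behavior can be sketched as an async generator. This is an assumed, simplified helper over string chunks, not the library's implementation:

```typescript
// Sketch of a default transform: exhaust and buffer the input stream,
// then delegate to stream on the fully concatenated input.
async function* defaultTransform(
  inputChunks: AsyncIterable<string>,
  stream: (input: string) => AsyncIterable<string>,
): AsyncIterable<string> {
  let buffered = "";
  for await (const chunk of inputChunks) {
    buffered += chunk; // buffer until the input is exhausted
  }
  yield* stream(buffered); // then stream over the complete input
}
```

The trade-off this illustrates: a transform implemented this way cannot emit anything until its upstream input has finished.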
Bind config to a Runnable, returning a new Runnable.
Create a new runnable from the current one that will try invoking the other passed fallback runnables if the initial invocation fails.
Bind lifecycle listeners to a Runnable, returning a new Runnable.
Add retry logic to an existing runnable.
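Retry logic of this kind can be sketched as a wrapper around invoke. This is an illustrative helper, not the library's withRetry; a real version would also support backoff between attempts.

```typescript
// Sketch of retry: re-invoke on failure up to maxAttempts, then rethrow
// the last error if every attempt failed.
function withRetry<In, Out>(
  invoke: (input: In) => Promise<Out>,
  maxAttempts = 3,
): (input: In) => Promise<Out> {
  return async (input: In) => {
    let lastError: unknown;
    for (let attempt = 1; attempt <= maxAttempts; attempt++) {
      try {
        return await invoke(input);
      } catch (err) {
        lastError = err; // remember the failure and try again
      }
    }
    throw lastError;
  };
}
```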
The name of the serializable. Override to provide an alias or to preserve the serialized module name in minified environments.
Callbacks for this call and any sub-calls (e.g. a Chain calling an LLM). Tags are passed to all callbacks, metadata is passed to handle*Start callbacks.
Default config object for the tool runnable.
A description of the tool.
Optional provider-specific extra fields for the tool.
Metadata for this call and any sub-calls (e.g. a Chain calling an LLM).
The name of the tool being called.
The tool response format. If "content" then the output of the tool is interpreted as the contents of a ToolMessage. If "content_and_artifact" then the output is expected to be a two-tuple corresponding to the (content, artifact) of a ToolMessage.
Whether to return the tool's output directly. Setting this to true means that after the tool is called, an agent should stop looping.
A Zod schema representing the parameters of the tool.
Tags for this call and any sub-calls (e.g. a Chain calling an LLM).
Whether to print out response text.
A path to the module that contains the class, e.g. ["langchain", "llms"]. Usually should be the same as the entrypoint the class is exported from.
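The two response formats can be sketched as a small dispatcher. The types and function here are hypothetical illustrations of the rule above, not the library's code:

```typescript
// A tool result is either plain content, or a [content, artifact] pair.
type ToolResult = string | [string, unknown];

// Split a tool result according to the response format:
// "content" keeps the raw output as the message content;
// "content_and_artifact" separates the two-tuple into its parts.
function toMessageParts(
  result: ToolResult,
  responseFormat: "content" | "content_and_artifact",
): { content: string; artifact?: unknown } {
  if (responseFormat === "content_and_artifact") {
    const [content, artifact] = result as [string, unknown];
    return { content, artifact };
  }
  return { content: result as string };
}
```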