interface GoogleGenerativeAIChatCallOptions

Allowed functions to call when the mode is "any". If empty, any one of the provided functions will be called.
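Call options from this interface are passed as the second argument when invoking the model. A minimal sketch, assuming the @langchain/google-genai package; the model name and option values are purely illustrative:

```typescript
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";

const model = new ChatGoogleGenerativeAI({ model: "gemini-1.5-flash" });

// Per-call options are supplied alongside the input; they apply to this call
// and any sub-calls it triggers.
const result = await model.invoke("Summarize relativity in one sentence.", {
  stop: ["\n\n"],               // stop tokens for this call
  tags: ["docs-example"],       // tags propagated to callbacks
  metadata: { userId: "123" },  // JSON-serializable metadata
  timeout: 10_000,              // milliseconds
});

console.log(result.content);
```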
Callbacks for this call and any sub-calls (e.g. a Chain calling an LLM). Tags are passed to all callbacks; metadata is passed to handle*Start callbacks.
Runtime values for attributes previously made configurable on this Runnable, or sub-Runnables.
Describes the format of structured outputs. This should be provided if the output is considered to be structured.
Maximum number of parallel calls to make.
Metadata for this call and any sub-calls (e.g. a Chain calling an LLM). Keys should be strings, values should be JSON-serializable.
Version of the AIMessage output format to store in message content.

AIMessage.contentBlocks will lazily parse the contents of content into a standard format. This flag can be used to additionally store the standard format as the message content, e.g., for serialization purposes.

You can also set the LC_OUTPUT_VERSION environment variable to "v1" to enable this by default.
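A hedged sketch of enabling the "v1" format for a single call, assuming the option is exposed as outputVersion on the call options (the exact name and availability depend on your langchain version):

```typescript
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";

const model = new ChatGoogleGenerativeAI({ model: "gemini-1.5-flash" });

// `outputVersion` here is an assumption based on the description above.
const message = await model.invoke("Hello!", { outputVersion: "v1" });

// With "v1", the standardized content blocks are stored as the message
// content itself, which is convenient for serialization.
console.log(message.contentBlocks);

// Alternatively, enable it by default via the environment:
// process.env.LC_OUTPUT_VERSION = "v1";
```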
Maximum number of times a call can recurse. If not provided, defaults to 25.
JSON schema describing the output to be returned by the model.
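One common way to supply such a schema is through withStructuredOutput, which converts a Zod (or JSON) schema into the schema sent to the model. A sketch, with the schema itself purely illustrative:

```typescript
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
import { z } from "zod";

const model = new ChatGoogleGenerativeAI({ model: "gemini-1.5-flash" });

// Bind an output schema; the model is instructed to return JSON matching it.
const structuredModel = model.withStructuredOutput(
  z.object({
    title: z.string().describe("Title of the paper"),
    year: z.number().describe("Publication year"),
  })
);

const paper = await structuredModel.invoke(
  "Extract: 'Attention Is All You Need' was published in 2017."
);
// paper is a plain object, e.g. { title: "Attention Is All You Need", year: 2017 }
```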
Unique identifier for the tracer run for this call. If not provided, a new UUID will be generated.
Name for the tracer run for this call. Defaults to the name of the class.
Abort signal for this call. If provided, the call will be aborted when the signal is aborted.
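For example, an AbortController can be used to cancel the request from the outside; a minimal sketch:

```typescript
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";

const model = new ChatGoogleGenerativeAI({ model: "gemini-1.5-flash" });

// Abort the request if it has not completed within 5 seconds.
const controller = new AbortController();
const timer = setTimeout(() => controller.abort(), 5_000);

try {
  const response = await model.invoke("Write a long story.", {
    signal: controller.signal,
  });
  console.log(response.content);
} finally {
  clearTimeout(timer);
}
```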
Stop tokens to use for this call. If not provided, the default stop tokens for the model will be used.
Whether or not to include usage data, such as token counts, in the streamed response chunks.
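A sketch of reading usage data from a stream, assuming chunks expose it as usage_metadata:

```typescript
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";

const model = new ChatGoogleGenerativeAI({ model: "gemini-1.5-flash" });

const stream = await model.stream("Tell me a joke.", { streamUsage: true });

for await (const chunk of stream) {
  // Most chunks carry text; the usage-bearing chunk includes token counts.
  if (chunk.usage_metadata) {
    console.log("token usage:", chunk.usage_metadata);
  }
}
```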
Tags for this call and any sub-calls (e.g. a Chain calling an LLM). You can use these to filter calls.
Timeout for this call in milliseconds.
Specifies how the chat model should use tools.
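A hedged sketch combining tool_choice with allowedFunctionNames (described above), assuming "any" is an accepted mode; the tool itself is an illustrative placeholder:

```typescript
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

// Illustrative tool; its name and schema are assumptions for this sketch.
const getWeather = tool(
  async ({ city }: { city: string }) => `Sunny in ${city}`,
  {
    name: "get_weather",
    description: "Look up the current weather for a city",
    schema: z.object({ city: z.string() }),
  }
);

const model = new ChatGoogleGenerativeAI({ model: "gemini-1.5-flash" });

// Force a tool call ("any" mode) and restrict which functions may be chosen.
const response = await model.bindTools([getWeather]).invoke(
  "What's the weather in Paris?",
  { tool_choice: "any", allowedFunctionNames: ["get_weather"] }
);

console.log(response.tool_calls);
```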