interface ChatXAICallOptions

callbacks: Callbacks for this call and any sub-calls (e.g. a Chain calling an LLM). Tags are passed to all callbacks; metadata is passed to handle*Start callbacks.
configurable: Runtime values for attributes previously made configurable on this Runnable or its sub-Runnables.
ls_structured_output_format: Describes the format of structured outputs. This should be provided if the output is considered structured.
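In practice this option is usually populated for you rather than set by hand; structured output is typically requested through the model's withStructuredOutput helper. A minimal sketch (the zod schema and prompt are illustrative):

```typescript
import { z } from "zod";
import { ChatXAI } from "@langchain/xai";

const llm = new ChatXAI();
// Ask for output conforming to a schema; the structured-output
// format metadata is tracked on the call for you.
const structuredLlm = llm.withStructuredOutput(
  z.object({ answer: z.string(), confidence: z.number() })
);
const result = await structuredLlm.invoke("What is the capital of France?");
```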
maxConcurrency: Maximum number of parallel calls to make.
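A minimal sketch of capping concurrency when batching (the prompts are illustrative):

```typescript
import { ChatXAI } from "@langchain/xai";

const llm = new ChatXAI();
// Three prompts, but at most two requests in flight at any time.
const results = await llm.batch(
  ["What is 1 + 1?", "What is 2 + 2?", "What is 3 + 3?"],
  { maxConcurrency: 2 }
);
```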
metadata: Metadata for this call and any sub-calls (e.g. a Chain calling an LLM). Keys should be strings; values should be JSON-serializable. See the example under tags below.
outputVersion: Version of the AIMessage output format to store in message content. AIMessage.contentBlocks will lazily parse the contents of content into a standard format. This flag can be used to additionally store the standard format as the message content, e.g. for serialization purposes. You can also set the LC_OUTPUT_VERSION environment variable to "v1" to enable this by default.
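A minimal sketch, assuming the option is exposed as outputVersion on this interface:

```typescript
import { ChatXAI } from "@langchain/xai";

const llm = new ChatXAI();
// Store the standardized content-block format on the message itself;
// setting LC_OUTPUT_VERSION=v1 in the environment has the same effect.
const result = await llm.invoke("Hello!", { outputVersion: "v1" });
```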
recursionLimit: Maximum number of times a call can recurse. If not provided, defaults to 25.
runId: Unique identifier for the tracer run for this call. If not provided, a new UUID will be generated.
runName: Name for the tracer run for this call. Defaults to the name of the class.
searchParameters: Search parameters for xAI's Live Search API. Enables the model to search the web for real-time information. See the Live Search example below.
signal: Abort signal for this call. If provided, the call will be aborted when the signal is aborted.
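For example, cancelling a slow call with an AbortController (the timeout value is arbitrary):

```typescript
import { ChatXAI } from "@langchain/xai";

const llm = new ChatXAI();
const controller = new AbortController();
// Abort the request if it is still running after 5 seconds.
setTimeout(() => controller.abort(), 5000);

try {
  const result = await llm.invoke("Write a very long essay.", {
    signal: controller.signal,
  });
} catch (e) {
  // The call rejects once the signal is aborted.
}
```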
stop: Stop tokens to use for this call. If not provided, the model's default stop tokens will be used.
tags: Tags for this call and any sub-calls (e.g. a Chain calling an LLM). You can use these to filter calls.
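A minimal sketch passing tags together with metadata (the tag and key names are illustrative):

```typescript
import { ChatXAI } from "@langchain/xai";

const llm = new ChatXAI();
// Both fields flow through to callbacks and tracing,
// so runs can be filtered by tag or metadata later.
const result = await llm.invoke("Hello!", {
  tags: ["production", "chat-ui"],
  metadata: { userId: "user-123" },
});
```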
timeout: Timeout for this call in milliseconds.
tool_choice: Specifies how the chat model should use tools.
tools: A list of tools the model may call. Can include standard function tools and xAI built-in tools like { type: "live_search" }.
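A minimal sketch of binding a standard function tool and steering it with tool_choice (the tool itself is illustrative, and "required" assumes xAI's OpenAI-compatible tool_choice values):

```typescript
import { tool } from "@langchain/core/tools";
import { z } from "zod";
import { ChatXAI } from "@langchain/xai";

// A standard function tool defined with a zod schema.
const getWeather = tool(
  async ({ city }) => `It is sunny in ${city}.`,
  {
    name: "get_weather",
    description: "Get the current weather for a city.",
    schema: z.object({ city: z.string() }),
  }
);

// Force the model to call a tool rather than answer directly.
const llm = new ChatXAI().bindTools([getWeather], {
  tool_choice: "required",
});
const result = await llm.invoke("What's the weather in Paris?");
```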
Example: enabling xAI's Live Search API through searchParameters.

```typescript
import { ChatXAI } from "@langchain/xai";

const llm = new ChatXAI();
const result = await llm.invoke("What's the latest news?", {
  searchParameters: {
    mode: "auto",
    max_search_results: 5,
  },
});
```

Example: using the built-in live_search tool instead.

```typescript
const llm = new ChatXAI().bindTools([{ type: "live_search" }]);
const result = await llm.invoke("What happened in tech news today?");
```