Interface • Since v1.0

ConfigurableChatModelCallOptions

interface ConfigurableChatModelCallOptions

Bases

BaseChatModelCallOptions
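
These options are accepted on a per-call basis, typically as the second argument to invoke() or stream() on a chat model created with initChatModel from langchain/chat_models/universal. A minimal sketch (the model name, provider, and prompt below are illustrative assumptions, not library defaults):

import { initChatModel } from "langchain/chat_models/universal";

// Create a configurable chat model; "gpt-4o-mini" / "openai" are
// placeholder choices for this sketch.
const model = await initChatModel("gpt-4o-mini", {
  modelProvider: "openai",
  temperature: 0,
});

// Per-call options from ConfigurableChatModelCallOptions go in the
// second argument to invoke().
const result = await model.invoke("Why is the sky blue?", {
  runName: "sky-question", // name for the tracer run
  tags: ["demo"],          // propagated to this call and any sub-calls
  timeout: 10_000,         // milliseconds
});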

Properties

property
callbacks: Callbacks
Callbacks for this call and any sub-calls (e.g. a Chain calling an LLM). Tags are passed to all callbacks; metadata is passed to handle*Start callbacks.

property
configurable: Record<string, any>
Runtime values for attributes previously made configurable on this Runnable or its sub-Runnables.

property
ls_structured_output_format: __type
Describes the format of structured outputs. This should be provided if the output is considered to be structured.

property
maxConcurrency: number
Maximum number of parallel calls to make.

property
metadata: Record<string, unknown>
Metadata for this call and any sub-calls (e.g. a Chain calling an LLM). Keys should be strings; values should be JSON-serializable.

property
outputVersion: MessageOutputVersion
Version of the AIMessage output format to store in message content.

AIMessage.contentBlocks will lazily parse the contents of content into a standard format. This flag can be used to additionally store the standard format as the message content, e.g. for serialization purposes.

  • "v0": provider-specific format in content (can be lazily parsed with .contentBlocks)
  • "v1": standardized format in content (consistent with .contentBlocks)

You can also set the LC_OUTPUT_VERSION environment variable to "v1" to enable this by default.

property
recursionLimit: number
Maximum number of times a call can recurse. If not provided, defaults to 25.

property
runId: string
Unique identifier for the tracer run for this call. If not provided, a new UUID will be generated.

property
runName: string
Name for the tracer run for this call. Defaults to the name of the class.

property
signal: AbortSignal
Abort signal for this call. If provided, the call will be aborted when the signal is aborted.

property
stop: string[]
Stop tokens to use for this call. If not provided, the default stop tokens for the model will be used.

property
tags: string[]
Tags for this call and any sub-calls (e.g. a Chain calling an LLM). You can use these to filter calls.

property
timeout: number
Timeout for this call, in milliseconds.

property
tool_choice: ToolChoice
Specifies how the chat model should use tools.

property
tools: (StructuredToolInterface<ToolInputSchemaBase, any, any> | Record<string, unknown> | ToolDefinition | RunnableToolLike<InteropZodType, unknown>)[]
A list of tools the model may call during this invocation.
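
As a further sketch, several of these options combined in a single tool-calling invocation; the get_weather tool, model choice, and timeout value are assumptions made up for illustration:

import { initChatModel } from "langchain/chat_models/universal";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

// Hypothetical tool, defined only for this example.
const getWeather = tool(async ({ city }) => `Sunny in ${city}`, {
  name: "get_weather",
  description: "Look up the current weather for a city.",
  schema: z.object({ city: z.string() }),
});

const model = await initChatModel("gpt-4o-mini", { modelProvider: "openai" });

// Abort the call if it takes longer than 15 seconds.
const controller = new AbortController();
const timer = setTimeout(() => controller.abort(), 15_000);

const response = await model.invoke("What is the weather in Paris?", {
  tools: [getWeather],   // offered to the model for this call only
  tool_choice: "auto",   // let the model decide whether to call a tool
  outputVersion: "v1",   // store standardized content blocks in message content
  signal: controller.signal,
});
clearTimeout(timer);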