JavaScript › langchain › chat_models › universal › ConfigurableChatModelCallOptions
Interfaceā—Since v1.1

ConfigurableChatModelCallOptions

interface ConfigurableChatModelCallOptions

Bases

BaseChatModelCallOptions

Properties

property
callbacks: Callbacks

Callbacks for this call and any sub-calls (e.g. a Chain calling an LLM). Tags are passed to all callbacks; metadata is passed to handle*Start callbacks.

property
configurable: Record<string, any>

Runtime values for attributes previously made configurable on this Runnable, or sub-Runnables.

property
ls_structured_output_format: __type

Describes the format of structured outputs. Provide this when the output is structured.

property
maxConcurrency: number

Maximum number of parallel calls to make.

property
metadata: Record<string, unknown>

Metadata for this call and any sub-calls (e.g. a Chain calling an LLM). Keys should be strings; values should be JSON-serializable.

property
outputVersion: MessageOutputVersion

Version of AIMessage output format to store in message content.

AIMessage.contentBlocks will lazily parse the contents of content into a standard format. This flag can be used to additionally store the standard format as the message content, e.g., for serialization purposes.

  • "v0": provider-specific format in content (can lazily parse with .contentBlocks)
  • "v1": standardized format in content (consistent with .contentBlocks)

You can also set LC_OUTPUT_VERSION as an environment variable to "v1" to enable this by default.
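The precedence described above (an explicit outputVersion option, falling back to the LC_OUTPUT_VERSION environment variable, defaulting to "v0") can be sketched as follows. The resolveOutputVersion helper is hypothetical, written only to illustrate the documented precedence, not a LangChain export:

```typescript
type MessageOutputVersion = "v0" | "v1";

// Hypothetical helper mirroring the documented precedence:
// explicit call option > LC_OUTPUT_VERSION env var > "v0" default.
function resolveOutputVersion(
  explicit?: MessageOutputVersion
): MessageOutputVersion {
  if (explicit) return explicit;
  return process.env.LC_OUTPUT_VERSION === "v1" ? "v1" : "v0";
}

console.log(resolveOutputVersion("v1")); // "v1" — explicit option wins
console.log(resolveOutputVersion()); // "v0" unless LC_OUTPUT_VERSION=v1
```

In real usage you would pass the option directly, e.g. as part of the call options on an invocation, rather than resolving it yourself.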

property
recursionLimit: number

Maximum number of times a call can recurse. If not provided, defaults to 25.

property
runId: string

Unique identifier for the tracer run for this call. If not provided, a new UUID will be generated.

property
runName: string

Name for the tracer run for this call. Defaults to the name of the class.

property
signal: AbortSignal

Abort signal for this call. If provided, the call will be aborted when the signal is aborted.
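A minimal sketch of wiring an AbortSignal into a call. The model invocation is replaced by a stand-in callModel function (not a LangChain API) so the abort behavior can be shown without a provider; in real usage you would pass the signal in the call options, e.g. { signal: controller.signal }:

```typescript
// Stand-in for a model call: resolves after 1s unless aborted first.
async function callModel(signal: AbortSignal): Promise<string> {
  if (signal.aborted) throw new Error("Aborted");
  return new Promise((resolve, reject) => {
    const timer = setTimeout(() => resolve("response"), 1_000);
    signal.addEventListener("abort", () => {
      clearTimeout(timer);
      reject(new Error("Aborted"));
    });
  });
}

const controller = new AbortController();
setTimeout(() => controller.abort(), 10); // cancel shortly after starting
callModel(controller.signal).catch((e) => console.log(e.message)); // "Aborted"
```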

property
stop: string[]

Stop tokens to use for this call. If not provided, the default stop tokens for the model will be used.

property
tags: string[]

Tags for this call and any sub-calls (e.g. a Chain calling an LLM). You can use these to filter calls.

property
timeout: number

Timeout for this call in milliseconds.

property
tool_choice: ToolChoice

Specifies how the chat model should use tools.

property
tools: (StructuredToolInterface<ToolInputSchemaBase, any, any> | Record<string, unknown> | ToolDefinition | RunnableToolLike<InteropZodType, unknown>)[]

List of tools the model may call during this invocation.
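The tools and tool_choice options work together: tools declares what the model may call, and tool_choice constrains how it chooses. A minimal sketch of the option shape follows. The ToolChoiceSketch alias is an assumption modeled on common LangChain provider values ("auto", "any", "none", or a specific tool name) — check the ToolChoice export for the exact type — and get_weather is a hypothetical tool in the plain-object form of the union:

```typescript
// Assumed shape of tool_choice; (string & {}) keeps the literal
// hints while still allowing arbitrary tool names.
type ToolChoiceSketch = "auto" | "any" | "none" | (string & {});

interface ToolOptionsSketch {
  tools?: Array<Record<string, unknown>>;
  tool_choice?: ToolChoiceSketch;
}

const options: ToolOptionsSketch = {
  tools: [
    {
      name: "get_weather", // hypothetical tool for illustration
      description: "Look up the current weather for a city",
      schema: { type: "object", properties: { city: { type: "string" } } },
    },
  ],
  tool_choice: "get_weather", // force the model to call this tool
};

console.log(options.tool_choice); // "get_weather"
```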