Class · Since v1.1

FakeToolCallingModel

Fake chat model for testing tool-calling functionality.

class FakeToolCallingModel

Bases: BaseChatModel
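
A minimal usage sketch. The import path assumes the class is exported from the langchain package's index entrypoint, and the constructor fields are assumed to mirror the toolCalls/toolStyle/structuredResponse properties documented below; none of this is confirmed API. The model defined here is reused in the method sketches further down:

import { FakeToolCallingModel } from "langchain"; // entrypoint assumed
import { HumanMessage } from "@langchain/core/messages";

// Assumption: each inner array of `toolCalls` is the set of tool calls
// the model emits on one successive invocation.
const model = new FakeToolCallingModel({
  toolCalls: [
    [{ name: "get_weather", args: { city: "Paris" }, id: "call_1" }],
    [], // second invocation emits no tool calls
  ],
});

const response = await model.invoke([new HumanMessage("What's the weather?")]);
console.log(response.tool_calls); // => [{ name: "get_weather", ... }]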


Inherited from BaseChatModel (langchain_core)

Attributes

rate_limiter, disable_streaming, output_version, model_config, OutputType

Methods

ainvoke, astream, agenerate, generate_prompt, agenerate_prompt, dict, bind_tools, with_structured_output

Inherited from BaseLanguageModel (langchain_core)

Attributes

custom_get_token_ids, model_config, InputType

Methods

set_verbose, generate_prompt, agenerate_prompt, with_structured_output, get_token_ids, get_num_tokens, get_num_tokens_from_messages

Inherited from RunnableSerializable (langchain_core)

Attributes

model_config

Methods

to_json, configurable_fields, configurable_alternatives

Inherited from Serializable

Properties

lc_kwargs: SerializedFields
lc_namespace: string[] — A path to the module that contains the class, e.g. ["langchain", "llms"]
lc_serializable: boolean
lc_aliases: Record<string, string>
lc_attributes: SerializedFields | undefined
lc_id: string[]
lc_secrets: __type | undefined
lc_serializable_keys: string[] | undefined

Methods

toJSON → Serialized
toJSONNotImplemented → SerializedNotImplemented
lc_name → string — The name of the serializable. Override to provide an alias or to preserve the serialized module name in minified environments.

Inherited from Runnable (langchain_core)

Attributes

InputType, OutputType, input_schema, output_schema, config_specs

Methods

get_name, get_input_schema, get_input_jsonschema, get_output_schema, get_output_jsonschema
constructor
constructor
property
cache: BaseCache<Generation[]>
property
callbacks: Callbacks

Callbacks for this call and any sub-calls (e.g. a Chain calling an LLM). Tags are passed to all callbacks; metadata is passed to handle*Start callbacks.

property
caller: AsyncCaller

The async caller should be used by subclasses to make any async calls, which will thus benefit from the concurrency and retry logic.

property
disableStreaming: boolean
property
lc_kwargs: SerializedFields
property
lc_namespace: string[]

A path to the module that contains the class, e.g. ["langchain", "llms"]. Usually this should be the same as the entrypoint the class is exported from.

property
lc_runnable: boolean
property
lc_serializable: boolean
property
metadata: Record<string, unknown>

Metadata for this call and any sub-calls (eg. a Chain calling an LLM). Keys should be strings, values should be JSON-serializable.

property
name: string

The name of the runnable, used for debugging and tracing.

property
outputVersion: MessageOutputVersion

Version of AIMessage output format to store in message content.

AIMessage.contentBlocks will lazily parse the contents of content into a standard format. This flag can be used to additionally store the standard format as the message content, e.g., for serialization purposes.

  • "v0": provider-specific format in content (can lazily parse with .contentBlocks)
  • "v1": standardized format in content (consistent with .contentBlocks)

You can also set the LC_OUTPUT_VERSION environment variable to "v1" to enable this by default.
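
A sketch of opting in (reusing the model from the sketch above; passing outputVersion as a constructor field is an assumption):

// Per instance: store standardized "v1" content blocks as message content.
const v1Model = new FakeToolCallingModel({ outputVersion: "v1" }); // field name assumed
// Or process-wide via the environment variable:
process.env.LC_OUTPUT_VERSION = "v1";
const reply = await v1Model.invoke("hi");
// With "v1", reply.content is already in the standard contentBlocks format.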

property
ParsedCallOptions: Omit<CallOptions, Exclude<keyof RunnableConfig, "signal" | "timeout" | "maxConcurrency">>
property
structuredResponse: any
property
tags: string[]

Tags for this call and any sub-calls (eg. a Chain calling an LLM). You can use these to filter calls.
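
For example (tags and metadata are standard RunnableConfig fields):

// Attach tags and metadata to this call and any sub-calls.
await model.invoke("hi", { tags: ["test"], metadata: { suite: "unit" } });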

property
toolCalls: ToolCall[][]
property
toolStyle: "openai" | "anthropic"
property
verbose: boolean

Whether to print out response text.

property
callKeys: string[]
property
index: string | number

Index of block in aggregate response

property
lc_aliases: Record<string, string>
property
lc_attributes: SerializedFields | undefined
property
lc_id: string[]
property
lc_secrets: __type | undefined
property
lc_serializable_keys: string[] | undefined
property
profile: ModelProfile
method
_batchWithConfig→ Promise<Error | ToolOutputT | ToolMessage<MessageStructure<MessageToolSet>>[]>

Internal method that handles batching and configuration for a runnable. It takes a function, input values, and optional configuration, and returns a promise that resolves to the output values.

method
_callWithConfig
method
_combineLLMOutput→ never[]
method
_generate→ Promise<ChatResult>
method
_generateCached→ Promise<LLMResult & __type>
method
_getOptionsList
method
_getSerializedCacheKeyParametersForCall→ string

Create a unique cache key for a specific call to a specific language model.

method
_identifyingParams→ Record<string, any>

Get the identifying parameters of the LLM.

method
_llmType→ string
method
_modelType→ string
method
_separateRunnableConfigFromCallOptions
method
_separateRunnableConfigFromCallOptionsCompat
method
_streamIterator→ AsyncGenerator<ToolOutputT | ToolMessage<MessageStructure<MessageToolSet>>>

Default streaming implementation. Subclasses should override this method if they support streaming output.

method
_streamLog
method
_streamResponseChunks→ AsyncGenerator<ChatGenerationChunk>
method
_transformStreamWithConfig
method
assign→ Runnable

Assigns new fields to the dict output of this runnable. Returns a new runnable.

method
asTool→ RunnableToolLike<InteropZodType<ToolCall<string, Record<string, any>> | T>, ToolOutputT | ToolMessage<MessageStructure<MessageToolSet>>>

Convert a runnable to a tool. Return a new instance of RunnableToolLike which contains the runnable, name, description and schema.

method
batch→ Promise<ToolOutputT | ToolMessage<MessageStructure<MessageToolSet>>[]>

Default implementation of batch, which calls invoke N times. Subclasses should override this method if they can batch more efficiently.
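
For instance (inputs and the concurrency cap are illustrative):

// Run the model over several inputs; the default implementation simply
// fans out to `invoke`, respecting `maxConcurrency` from the config.
const results = await model.batch(["hello", "hi", "hey"], {
  maxConcurrency: 2,
});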

method
bindTools→ FakeToolCallingModel | RunnableBinding<any, any, any>

Bind tool-like objects to this chat model.
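
A sketch of binding a hypothetical tool, using the tool helper and zod schema support from @langchain/core:

import { tool } from "@langchain/core/tools";
import { z } from "zod";

// Hypothetical tool for illustration.
const getWeather = tool(async ({ city }) => `Sunny in ${city}`, {
  name: "get_weather",
  description: "Get the current weather for a city",
  schema: z.object({ city: z.string() }),
});

// Yields a model (or RunnableBinding) with the tool attached.
const modelWithTools = model.bindTools([getWeather]);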

method
generate→ Promise<LLMResult>

Generates chat based on the input messages.

method
generatePrompt→ Promise<LLMResult>

Generates a prompt based on the input prompt values.

method
getGraph→ Graph
method
getLsParams→ LangSmithParams
method
getName→ string
method
getNumTokens→ Promise<number>

Get the number of tokens in the content.

method
invocationParams→ any

Get the parameters used to invoke the model.

method
invoke→ Promise<ToolReturnType<TInput, TConfig, ToolOutputT>>

Invokes the tool with the provided input and configuration.

method
pick→ Runnable

Pick keys from the dict output of this runnable. Returns a new runnable.

method
pipe→ Runnable<StructuredToolCallInput<SchemaT, SchemaInputT>, Exclude<NewRunOutput, Error>>

Create a new runnable sequence that runs each individual runnable in series, piping the output of one runnable into another runnable or runnable-like.
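
For example, piping the model into a string output parser (StringOutputParser is a real @langchain/core export; the composition is illustrative):

import { StringOutputParser } from "@langchain/core/output_parsers";

// The model's message output flows into the parser, yielding a plain string.
const chain = model.pipe(new StringOutputParser());
const text = await chain.invoke("hello");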

method
stream→ Promise<IterableReadableStream<ToolOutputT | ToolMessage<MessageStructure<MessageToolSet>>>>

Stream output in chunks.

method
streamEvents→ IterableReadableStream<StreamEvent>

Generate a stream of events emitted by the internal steps of the runnable.

Use to create an iterator over StreamEvents that provide real-time information about the progress of the runnable, including StreamEvents from intermediate results.

A StreamEvent is a dictionary with the following schema:

  • event: string - Event names are of the format: on_[runnable_type]_(start|stream|end).
  • name: string - The name of the runnable that generated the event.
  • run_id: string - Randomly generated ID associated with the given execution of the runnable that emitted the event. A child runnable that gets invoked as part of the execution of a parent runnable is assigned its own unique ID.
  • tags: string[] - The tags of the runnable that generated the event.
  • metadata: Record<string, any> - The metadata of the runnable that generated the event.
  • data: Record<string, any>

Below is a table that illustrates some events that might be emitted by various chains. Metadata fields have been omitted from the table for brevity. Chain definitions have been included after the table.

ATTENTION This reference table is for the V2 version of the schema.

+----------------------+-----------------------------+------------------------------------------+
| event                | input                       | output/chunk                             |
+======================+=============================+==========================================+
| on_chat_model_start  | {"messages": BaseMessage[]} |                                          |
+----------------------+-----------------------------+------------------------------------------+
| on_chat_model_stream |                             | AIMessageChunk("hello")                  |
+----------------------+-----------------------------+------------------------------------------+
| on_chat_model_end    | {"messages": BaseMessage[]} | AIMessageChunk("hello world")            |
+----------------------+-----------------------------+------------------------------------------+
| on_llm_start         | {'input': 'hello'}          |                                          |
+----------------------+-----------------------------+------------------------------------------+
| on_llm_stream        |                             | 'Hello'                                  |
+----------------------+-----------------------------+------------------------------------------+
| on_llm_end           | 'Hello human!'              |                                          |
+----------------------+-----------------------------+------------------------------------------+
| on_chain_start       |                             |                                          |
+----------------------+-----------------------------+------------------------------------------+
| on_chain_stream      |                             | "hello world!"                           |
+----------------------+-----------------------------+------------------------------------------+
| on_chain_end         | [Document(...)]             | "hello world!, goodbye world!"           |
+----------------------+-----------------------------+------------------------------------------+
| on_tool_start        | {"x": 1, "y": "2"}          |                                          |
+----------------------+-----------------------------+------------------------------------------+
| on_tool_end          |                             | {"x": 1, "y": "2"}                       |
+----------------------+-----------------------------+------------------------------------------+
| on_retriever_start   | {"query": "hello"}          |                                          |
+----------------------+-----------------------------+------------------------------------------+
| on_retriever_end     | {"query": "hello"}          | [Document(...), ..]                      |
+----------------------+-----------------------------+------------------------------------------+
| on_prompt_start      | {"question": "hello"}       |                                          |
+----------------------+-----------------------------+------------------------------------------+
| on_prompt_end        | {"question": "hello"}       | ChatPromptValue(messages: BaseMessage[]) |
+----------------------+-----------------------------+------------------------------------------+

The "on_chain_*" events are the default for Runnables that don't fit one of the above categories.

In addition to the standard events above, users can also dispatch custom events.

Custom events will only be surfaced in the v2 version of the API.

A custom event has the following format:

+-----------+------+------------------------------------------------------------+
| Attribute | Type | Description                                                |
+===========+======+============================================================+
| name      | str  | A user defined name for the event.                         |
+-----------+------+------------------------------------------------------------+
| data      | Any  | The data associated with the event. This can be anything.  |
+-----------+------+------------------------------------------------------------+

Here's an example:

import { RunnableLambda } from "@langchain/core/runnables";
import { dispatchCustomEvent } from "@langchain/core/callbacks/dispatch";
// Use this import for web environments that don't support "async_hooks"
// and manually pass config to child runs.
// import { dispatchCustomEvent } from "@langchain/core/callbacks/dispatch/web";

const slowThing = RunnableLambda.from(async (someInput: string) => {
  // Placeholder for some slow operation
  await new Promise((resolve) => setTimeout(resolve, 100));
  await dispatchCustomEvent("progress_event", {
    message: "Finished step 1 of 2",
  });
  await new Promise((resolve) => setTimeout(resolve, 100));
  return "Done";
});

const eventStream = await slowThing.streamEvents("hello world", {
  version: "v2",
});

for await (const event of eventStream) {
  if (event.event === "on_custom_event") {
    console.log(event);
  }
}
method
streamLog→ AsyncGenerator<RunLogPatch>

Stream all output from a runnable, as reported to the callback system. This includes all inner runs of LLMs, Retrievers, Tools, etc. Output is streamed as Log objects, which include a list of jsonpatch ops that describe how the state of the run has changed in each step, and the final state of the run. The jsonpatch ops can be applied in order to construct state.

method
toJSON→ Serialized
method
toJSONNotImplemented→ SerializedNotImplemented
method
transform→ AsyncGenerator<ToolOutputT | ToolMessage<MessageStructure<MessageToolSet>>>

Default implementation of transform, which buffers input and then calls stream. Subclasses should override this method if they can start producing output while input is still being generated.

method
withConfig→ Runnable<StructuredToolCallInput<SchemaT, SchemaInputT>, ToolOutputT | ToolMessage<MessageStructure<MessageToolSet>>, RunnableConfig<Record<string, any>>>

Bind config to a Runnable, returning a new Runnable.

method
withFallbacks→ RunnableWithFallbacks<StructuredToolCallInput<SchemaT, SchemaInputT>, ToolOutputT | ToolMessage<MessageStructure<MessageToolSet>>>

Create a new runnable from the current one that will try invoking other passed fallback runnables if the initial invocation fails.
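
A sketch (the backup model is a second hypothetical instance; the { fallbacks } object form is the @langchain/core signature):

// If `model` throws during invocation, `backupModel` is tried next.
const backupModel = new FakeToolCallingModel({ toolCalls: [] }); // constructor assumed, as above
const withBackup = model.withFallbacks({ fallbacks: [backupModel] });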

method
withListeners→ Runnable<StructuredToolCallInput<SchemaT, SchemaInputT>, ToolOutputT | ToolMessage<MessageStructure<MessageToolSet>>, RunnableConfig<Record<string, any>>>

Bind lifecycle listeners to a Runnable, returning a new Runnable. The Run object contains information about the run, including its id, type, input, output, error, startTime, endTime, and any tags or metadata added to the run.

method
withRetry→ RunnableRetry<StructuredToolCallInput<SchemaT, SchemaInputT>, ToolOutputT | ToolMessage<MessageStructure<MessageToolSet>>, RunnableConfig<Record<string, any>>>

Add retry logic to an existing runnable.
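
For example (stopAfterAttempt is a real RunnableRetry option in @langchain/core):

// Retry transient failures, giving up after three attempts.
const retrying = model.withRetry({ stopAfterAttempt: 3 });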

method
withStructuredOutput→ RunnableLambda<unknown, any, RunnableConfig<Record<string, any>>>
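
A sketch of requesting structured output (how this fake model fulfills the schema, presumably via the structuredResponse property above, is an assumption):

import { z } from "zod";

const answerSchema = z.object({ answer: z.string() });
// Returns a runnable that yields objects matching the schema.
const structuredModel = model.withStructuredOutput(answerSchema);
const result = await structuredModel.invoke("What is 2 + 2?");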
method
_convertInputToPromptValue
method
isRunnable→ thing is Runnable<any, any, RunnableConfig<Record<string, any>>>
method
lc_name→ string

The name of the serializable. Override to provide an alias or to preserve the serialized module name in minified environments.

Implemented as a static method to support loading logic.

deprecated method
serialize→ SerializedLLM
deprecated method
deserialize→ Promise<BaseLanguageModel<any, BaseLanguageModelCallOptions>>
Additional methods inherited from Runnable (langchain_core): config_schema, get_config_jsonschema, get_graph, get_prompts, ainvoke, batch_as_completed, abatch, abatch_as_completed, astream, astream_log, astream_events, atransform, bind, with_config, with_listeners, with_alisteners, with_types, with_retry, map, with_fallbacks, as_tool