Function · Since v1.1

# fakeModel

Creates a new `FakeBuiltModel` for testing.

Returns a chainable builder: queue responses, then pass the model anywhere a chat model is expected. Responses are consumed in FIFO order, one per `invoke()` call.

## API summary

| Method | Description |
| --- | --- |
| `fakeModel()` | Creates a new fake chat model. Returns a chainable builder. |
| `.respond(message)` | Queues an `AIMessage` (or any `BaseMessage`) to return on the next invocation. |
| `.respond(error)` | Queues an `Error` to throw on the next invocation. |
| `.respond(factory)` | Queues a function `(messages) => BaseMessage \| Error` for dynamic responses. |
| `.respondWithTools(toolCalls)` | Shorthand for `.respond()` with tool calls. Each entry needs `name` and `args`; `id` is optional. |
| `.alwaysThrow(error)` | Makes every invocation throw this error, regardless of the queue. |
| `.structuredResponse(value)` | Sets the value returned by `.withStructuredOutput()`. |
| `.bindTools(tools)` | Binds tools to the model. Returns a `RunnableBinding` that shares the response queue and call recording. |
| `.withStructuredOutput(schema)` | Returns a runnable that produces the `.structuredResponse()` value. |
| `.calls` | Read-only array of `{ messages, options }` entries, one per invocation. |
| `.callCount` | Number of times the model has been invoked. |
```ts
fakeModel(): FakeBuiltModel
```

## Used in Docs

- Unit testing

## Example
```ts
import { fakeModel } from "@langchain/core/testing";
import { AIMessage, HumanMessage } from "@langchain/core/messages";

const model = fakeModel()
  .respondWithTools([{ name: "search", args: { query: "weather" } }])
  .respond(new AIMessage("Sunny and warm."));

const r1 = await model.invoke([new HumanMessage("What's the weather?")]);
// r1.tool_calls[0].name === "search"

const r2 = await model.invoke([new HumanMessage("Thanks")]);
// r2.content === "Sunny and warm."
```
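
Queued errors follow the same FIFO rules. A minimal sketch of the failure paths, using the `.respond(error)` and `.alwaysThrow()` methods from the API summary above (exact propagation behavior is inferred from that summary):

```ts
import { fakeModel } from "@langchain/core/testing";
import { AIMessage, HumanMessage } from "@langchain/core/messages";

// One-off failure: the next invocation throws, later ones keep draining the queue.
const flaky = fakeModel()
  .respond(new Error("rate limited"))
  .respond(new AIMessage("recovered"));

await flaky.invoke([new HumanMessage("hi")]).catch((e) => {
  // e.message === "rate limited"
});
const ok = await flaky.invoke([new HumanMessage("retry")]);
// ok.content === "recovered"

// Permanent failure: every invocation throws, regardless of the queue.
const down = fakeModel().alwaysThrow(new Error("service down"));
```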
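
Call recording and structured output can be checked the same way. A sketch assuming `.withStructuredOutput()` accepts a zod schema, as LangChain chat models generally do (the schema type isn't pinned down in the summary above):

```ts
import { z } from "zod";
import { fakeModel } from "@langchain/core/testing";
import { AIMessage, HumanMessage } from "@langchain/core/messages";

const model = fakeModel()
  .respond(new AIMessage("ok"))
  .structuredResponse({ city: "Paris", tempC: 21 });

await model.invoke([new HumanMessage("ping")]);
// Every invocation is recorded on the builder:
// model.callCount === 1
// model.calls[0].messages[0].content === "ping"

// Per the API summary, the structured runnable produces the
// .structuredResponse() value.
const structured = model.withStructuredOutput(
  z.object({ city: z.string(), tempC: z.number() })
);
const value = await structured.invoke([new HumanMessage("Weather in Paris?")]);
// value deep-equals { city: "Paris", tempC: 21 }
```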