LangChain Reference
@langchain/mcp-adapters

LoadMcpToolsOptions

Type • Since v1.0

Properties
property
additionalToolNamePrefix: string

An additional prefix to add to the tool name. It is added at the very beginning of the tool name, separated by a double underscore.

For example, if additionalToolNamePrefix is "mcp" and prefixToolNameWithServerName is true, the tool name "my-tool" provided by server "my-server" becomes "mcp__my-server__my-tool". Similarly, if additionalToolNamePrefix is "mcp" and prefixToolNameWithServerName is false, the tool name becomes "mcp__my-tool".

property
afterToolCall: ToolHooks["afterToolCall"]

Callback invoked after each tool call.

property
beforeToolCall: ToolHooks["beforeToolCall"]

Callback invoked before each tool call.

property
defaultToolTimeout: number

Default timeout in milliseconds for tool execution. Must be greater than 0. If not specified, tools use their own configured timeout values.

property
onProgress: Notifications["onProgress"]

Callback invoked when a tool emits a progress notification.

property
outputHandling: OutputHandling

Defines where to place each tool output type in the LangChain ToolMessage.

property
prefixToolNameWithServerName: boolean

If true, the tool name is prefixed with the server name followed by a double underscore. This is useful if you want to avoid tool name collisions across servers.

property
throwOnLoadError: boolean

If true, an error is thrown when a tool fails to load.

property
useStandardContentBlocks: boolean

If true, tools that output image or audio content use LangChain's standard multimodal content blocks, and embedded resources are converted to StandardFileBlock objects. If false, artifacts are left in their MCP format; embedded resources are still converted to StandardFileBlock objects when outputHandling causes them to be treated as content, since chat model providers would otherwise be unable to interpret them.
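Taken together, the properties above form a plain options object. The sketch below mirrors the documented shape with a local interface; this interface is an assumption for illustration only, and in real code you would import LoadMcpToolsOptions from "@langchain/mcp-adapters" rather than redeclare it. The specific values ("mcp", 30_000 ms) are arbitrary example choices:

```typescript
// Local mirror of the documented option shape — an assumption for this
// sketch; in real code, import LoadMcpToolsOptions from
// "@langchain/mcp-adapters" instead of redeclaring it.
interface LoadMcpToolsOptionsSketch {
  additionalToolNamePrefix?: string;
  defaultToolTimeout?: number;
  prefixToolNameWithServerName?: boolean;
  throwOnLoadError?: boolean;
  useStandardContentBlocks?: boolean;
}

// Example values only; tune these for your own setup.
const options: LoadMcpToolsOptionsSketch = {
  throwOnLoadError: true,             // fail fast if any tool fails to load
  prefixToolNameWithServerName: true, // "my-server__my-tool"
  additionalToolNamePrefix: "mcp",    // "mcp__my-server__my-tool"
  useStandardContentBlocks: true,     // standard multimodal content blocks
  defaultToolTimeout: 30_000,         // milliseconds; must be > 0
};

console.log(options);
```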
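The two naming options compose in a fixed order, as described under additionalToolNamePrefix: the additional prefix comes first, then the server name, then the tool name, all joined by double underscores. A small standalone helper (hypothetical — applyToolNamePrefixes is not part of the package, it only mirrors the documented behavior) makes the composition explicit:

```typescript
// Hypothetical helper illustrating how the documented naming options
// compose. NOT part of @langchain/mcp-adapters; it only mirrors the
// behavior described for additionalToolNamePrefix and
// prefixToolNameWithServerName.
function applyToolNamePrefixes(
  toolName: string,
  serverName: string,
  opts: {
    additionalToolNamePrefix?: string;
    prefixToolNameWithServerName?: boolean;
  },
): string {
  const parts: string[] = [];
  if (opts.additionalToolNamePrefix) parts.push(opts.additionalToolNamePrefix);
  if (opts.prefixToolNameWithServerName) parts.push(serverName);
  parts.push(toolName);
  return parts.join("__"); // segments separated by a double underscore
}

console.log(
  applyToolNamePrefixes("my-tool", "my-server", {
    additionalToolNamePrefix: "mcp",
    prefixToolNameWithServerName: true,
  }),
); // "mcp__my-server__my-tool"
```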