LangChain Reference
Typeā—Since v1.0

CreateAgentParams


Properties

  • checkpointer: BaseCheckpointSaver | boolean
  • contextSchema: ContextSchema
  • description: string
  • includeAgentName: "inline"
  • middleware: readonly AgentMiddleware[]
  • model: string | AgentLanguageModelLike
  • name: string
  • responseFormat: ResponseFormatType
  • signal: AbortSignal
  • stateSchema: TStateSchema
  • store: BaseStore
  • tools: (ServerTool | ClientTool)[]
  • version: "v1" | "v2"
  • systemPrompt: string | SystemMessage (deprecated)

contextSchema: The schema of the middleware context. Middleware context is read-only and is not persisted between invocations. It can be either:

  • A Zod object
  • A Zod optional object
  • A Zod default object
  • Undefined

description: A description of the tool.

includeAgentName: Use to specify how to expose the agent name to the underlying supervisor LLM.

  • undefined: Relies on the LLM provider's support for AIMessage#name. Currently, only OpenAI supports this.
  • "inline": Adds the agent name directly into the content field of the AIMessage using XML-style tags. Example: "How can I help you?" -> "<name>agent_name</name><content>How can I help you?</content>"
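The "inline" option can be sketched as a small formatting helper. This is an illustrative stand-in, not the actual LangChain internals; `formatInlineAgentName` is a hypothetical name:

```typescript
// Illustrative sketch of the "inline" naming scheme: the agent name is
// embedded into the AIMessage content using XML-style tags. This helper
// is hypothetical; LangChain performs the equivalent rewrite internally.
function formatInlineAgentName(name: string, content: string): string {
  return `<name>${name}</name><content>${content}</content>`;
}

// formatInlineAgentName("agent_name", "How can I help you?")
// yields "<name>agent_name</name><content>How can I help you?</content>"
```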

middleware: Middleware instances to run during agent execution. Each middleware can define its own state schema and hook into the agent lifecycle.

name: The name of the tool being called.

responseFormat: The tool response format.

If "content", the tool's output is interpreted as the contents of a ToolMessage. If "content_and_artifact", the output is expected to be a two-tuple corresponding to the (content, artifact) of a ToolMessage.
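The two formats can be sketched as follows. `ToolMessageLike` and `toToolMessage` are illustrative stand-ins for the real ToolMessage handling, under the assumption that tool output is either a string or a [content, artifact] two-tuple:

```typescript
// Hypothetical stand-in for a ToolMessage; not the real LangChain class.
type ToolMessageLike = { content: string; artifact?: unknown };

// Interpret raw tool output according to the declared response format.
function toToolMessage(
  format: "content" | "content_and_artifact",
  output: unknown,
): ToolMessageLike {
  if (format === "content_and_artifact") {
    if (!Array.isArray(output) || output.length !== 2) {
      throw new Error("expected a [content, artifact] two-tuple");
    }
    const [content, artifact] = output as [string, unknown];
    return { content, artifact };
  }
  // "content": the raw output becomes the message content.
  return { content: String(output) };
}
```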

signal: Abort signal for this call. If provided, the call is aborted when the signal is aborted.
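A minimal sketch of the AbortSignal semantics the `signal` property builds on, in plain TypeScript with no LangChain APIs: a pending operation settles with a rejection once the signal fires. The `abortable` helper is a generic illustration, not LangChain's actual implementation:

```typescript
// Wrap a promise so it rejects as soon as the given AbortSignal aborts.
// Generic utility for illustration only.
function abortable<T>(work: Promise<T>, signal: AbortSignal): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    if (signal.aborted) {
      reject(new Error("aborted"));
      return;
    }
    signal.addEventListener("abort", () => reject(new Error("aborted")), {
      once: true,
    });
    work.then(resolve, reject);
  });
}
```

A caller would typically pair this with an `AbortController`, calling `controller.abort()` (for example from a timeout) to cancel the in-flight work.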

stateSchema: The schema of the middleware state. Middleware state is persisted between invocations. It can be either:

  • A Zod object (InteropZodObject)
  • A StateSchema from LangGraph (supports ReducedValue, UntrackedValue)
  • An AnnotationRoot
  • Undefined

version: Determines the version of the graph to create. Can be one of:

  • "v1": The tool node processes the full AIMessage containing all tool calls. All tool calls are executed concurrently via Promise.all inside a single graph node. Choose v1 when your tools invoke sub-graphs or other long-running async work and you need true parallelism — the Promise.all approach is unaffected by LangGraph's per-task checkpoint serialisation.

  • "v2": Each tool call is dispatched as an independent graph task using the Send API. Tasks are scheduled in parallel by LangGraph, but when tools invoke sub-graphs the underlying checkpoint writes can cause effective serialisation, making concurrent tool calls execute sequentially. v2 is the better choice when you need per-tool-call checkpointing, independent fault isolation, or interrupt() support inside individual tool calls.
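The practical difference can be illustrated with plain async TypeScript. This is a simplified model, not the actual graph machinery; `ToolCall`, `runToolsV1`, and `runToolsV2Serialised` are hypothetical names. A v1-style node starts every tool call before any finishes, while a v2-style schedule under checkpoint serialisation runs them one at a time:

```typescript
// Hypothetical model of the two strategies; `ToolCall` is illustrative.
type ToolCall = { name: string; run: () => Promise<string> };

// "v1"-style: all tool calls from a single AIMessage run concurrently
// inside one node via Promise.all.
async function runToolsV1(calls: ToolCall[]): Promise<string[]> {
  return Promise.all(calls.map((c) => c.run()));
}

// "v2"-style under checkpoint serialisation: each call is its own task,
// and serialised checkpoint writes make them execute one after another.
async function runToolsV2Serialised(calls: ToolCall[]): Promise<string[]> {
  const results: string[] = [];
  for (const call of calls) {
    results.push(await call.run());
  }
  return results;
}

// Count how many tool calls are in flight at once under a given runner.
async function peakConcurrency(
  runner: (calls: ToolCall[]) => Promise<string[]>,
  n: number,
): Promise<number> {
  let inFlight = 0;
  let peak = 0;
  const calls: ToolCall[] = Array.from({ length: n }, (_, i) => ({
    name: `tool_${i}`,
    run: async () => {
      inFlight += 1;
      peak = Math.max(peak, inFlight);
      await new Promise((resolve) => setTimeout(resolve, 5));
      inFlight -= 1;
      return `tool_${i}`;
    },
  }));
  await runner(calls);
  return peak;
}
```

Under this model, three tool calls overlap fully in the v1-style runner but never overlap in the serialised v2-style runner, which is the trade-off the bullet points above describe.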

systemPrompt (deprecated): The system message string for this step.