LangChain Reference
Typeā—Since v1.1

CreateAgentParams


Properties

property
checkpointer: BaseCheckpointSaver | boolean
property
contextSchema: ContextSchema
property
description: string
property
includeAgentName: "inline"
property
middleware: readonly AgentMiddleware[]
property
model: string | LanguageModelLike
property
name: string
property
responseFormat: ResponseFormatType
property
signal: AbortSignal
property
stateSchema: TStateSchema
property
store: BaseStore
property
tools: (ServerTool | ClientTool)[]
property
version: "v1" | "v2"
deprecated property
systemPrompt: string | SystemMessage

checkpointer: An optional checkpoint saver to persist the agent's state.

contextSchema: The schema of the middleware context. Middleware context is read-only and is not persisted between invocations. It can be either:

  • A Zod object
  • A Zod optional object
  • A Zod default object
  • Undefined

description: A description of the tool.

includeAgentName: Specifies how to expose the agent name to the underlying supervisor LLM.

  • undefined: Relies on the LLM provider's AIMessage#name field. Currently, only OpenAI supports this.
  • "inline": Adds the agent name directly into the content field of the AIMessage using XML-style tags. Example: "How can I help you?" -> "<name>agent_name</name><content>How can I help you?</content>"
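The "inline" convention above can be sketched as a pair of pure functions. This is an illustration of the documented format only, not the library's implementation; the function names are invented:

```typescript
// Illustrative sketch of the "inline" agent-name convention: the agent's
// name and the message text are wrapped in XML-style tags inside the
// AIMessage content field. Not the langchain implementation.
function inlineAgentName(agentName: string, content: string): string {
  return `<name>${agentName}</name><content>${content}</content>`;
}

// Recovering the name and content from the formatted string.
function parseInlineAgentName(
  formatted: string
): { name: string; content: string } | null {
  const match = formatted.match(
    /^<name>(.*?)<\/name><content>([\s\S]*)<\/content>$/
  );
  return match ? { name: match[1], content: match[2] } : null;
}

console.log(inlineAgentName("agent_name", "How can I help you?"));
// <name>agent_name</name><content>How can I help you?</content>
```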

middleware: Middleware instances to run during agent execution. Each middleware can define its own state schema and hook into the agent lifecycle.

name: The name of the tool being called.

responseFormat: The tool response format. If "content", the output of the tool is interpreted as the contents of a ToolMessage. If "content_and_artifact", the output is expected to be a two-tuple corresponding to the (content, artifact) of a ToolMessage.

signal: Abort signal for this call. If provided, the call is aborted when the signal fires.

stateSchema: The schema of the middleware state. Middleware state is persisted between invocations. It can be either:

  • A Zod object (InteropZodObject)
  • A StateSchema from LangGraph (supports ReducedValue, UntrackedValue)
  • An AnnotationRoot
  • Undefined

store: An optional store to persist the agent's state.

version: Determines the version of the graph to create. Can be one of:

  • "v1": The tool node processes a single message. All tool calls in the message are executed in parallel within the tool node.
  • "v2": The tool node processes a single tool call. Tool calls are distributed across multiple instances of the tool node using the Send API.
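The contrast between the two versions can be sketched with plain promises. This is a schematic only: the tool and message shapes are invented, and the real "v2" behavior uses LangGraph's Send API rather than a local function call:

```typescript
// Schematic contrast of the two tool-node strategies. All shapes here
// are invented for illustration.
type ToolCall = { name: string; args: Record<string, unknown> };
type ToolFn = (args: Record<string, unknown>) => Promise<string>;

const toolRegistry: Record<string, ToolFn> = {
  add: async ({ a, b }) => String((a as number) + (b as number)),
  upper: async ({ s }) => (s as string).toUpperCase(),
};

// "v1": one tool node receives the whole message and executes every
// tool call it contains in parallel.
async function toolNodeV1(calls: ToolCall[]): Promise<string[]> {
  return Promise.all(calls.map((c) => toolRegistry[c.name](c.args)));
}

// "v2": each tool call is routed to its own tool-node instance
// (modeled here as one function invocation per call), so calls can be
// scheduled and retried independently.
async function toolNodeV2(call: ToolCall): Promise<string> {
  return toolRegistry[call.name](call.args);
}

const calls: ToolCall[] = [
  { name: "add", args: { a: 2, b: 3 } },
  { name: "upper", args: { s: "hi" } },
];

toolNodeV1(calls).then((r) => console.log(r)); // [ '5', 'HI' ]
Promise.all(calls.map(toolNodeV2)).then((r) => console.log(r)); // [ '5', 'HI' ]
```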

systemPrompt (deprecated): The system message string for this step.
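Pulling the properties together, a minimal sketch of a params object follows. The types are stubbed locally so the snippet stands alone; in real code they come from langchain, and the model identifier, agent name, and description are illustrative values, not recommendations:

```typescript
// Local stand-in for a subset of CreateAgentParams, so the sketch is
// self-contained. In real code the type comes from langchain.
interface AgentParamsSketch {
  model: string; // or a LanguageModelLike instance
  name?: string;
  description?: string;
  includeAgentName?: "inline";
  version?: "v1" | "v2";
  signal?: AbortSignal;
}

const controller = new AbortController();

// Illustrative values only; "openai:gpt-4o-mini" is a hypothetical
// model identifier.
const params: AgentParamsSketch = {
  model: "openai:gpt-4o-mini",
  name: "research_agent",
  description: "Looks up background information.",
  includeAgentName: "inline",
  version: "v2",
  signal: controller.signal,
};

console.log(params.version); // v2
```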