langchain.js

    Type Alias CreateAgentParams<StructuredResponseType, StateSchema, ContextSchema, ResponseFormatType>

    type CreateAgentParams<
        StructuredResponseType extends Record<string, any> = Record<string, any>,
        StateSchema extends AnyAnnotationRoot | InteropZodObject | undefined = undefined,
        ContextSchema extends AnyAnnotationRoot | InteropZodObject = AnyAnnotationRoot,
        ResponseFormatType =
            | InteropZodType<StructuredResponseType>
            | InteropZodType<unknown>[]
            | JsonSchemaFormat
            | JsonSchemaFormat[]
            | ResponseFormat
            | TypedToolStrategy<StructuredResponseType>
            | ToolStrategy<StructuredResponseType>
            | ProviderStrategy<StructuredResponseType>
            | ResponseFormatUndefined,
    > = {
        checkpointer?: BaseCheckpointSaver | boolean;
        contextSchema?: ContextSchema;
        description?: string;
        includeAgentName?: "inline";
        middleware?: readonly AgentMiddleware<any, any, any>[];
        model: string | LanguageModelLike;
        name?: string;
        responseFormat?: ResponseFormatType;
        signal?: AbortSignal;
        stateSchema?: StateSchema;
        store?: BaseStore;
        systemPrompt?: string;
        tools?: (ClientTool | ServerTool)[];
        version?: "v1" | "v2";
    }

    Properties

    checkpointer?: BaseCheckpointSaver | boolean

    An optional checkpoint saver to persist the agent's state.
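
    A minimal sketch using the in-memory MemorySaver from @langchain/langgraph, reusing the getWeather tool and HumanMessage from other examples on this page (the thread_id key is the standard checkpointing convention):

    import { MemorySaver } from "@langchain/langgraph";

    const agent = createAgent({
        model: "openai:gpt-4o",
        tools: [getWeather],
        // State is saved per thread and restored on the next invocation.
        checkpointer: new MemorySaver(),
    });

    await agent.invoke({
        messages: [new HumanMessage("Hi, I'm Bob.")],
    }, {
        configurable: { thread_id: "thread-1" },
    });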

    contextSchema?: ContextSchema

    An optional schema for the context. It lets you pass a typed context object into the agent invocation and access it in hooks such as the prompt function and middleware. Unlike the agent state (defined in stateSchema), the context is not persisted between agent invocations.

    import { z } from "zod";
    import { createAgent } from "langchain";
    import { SystemMessage, HumanMessage } from "@langchain/core/messages";

    const agent = createAgent({
        model: "openai:gpt-4o",
        tools: [getWeather],
        contextSchema: z.object({
            capital: z.string(),
        }),
        prompt: (state, config) => {
            return [
                new SystemMessage(`You are a helpful assistant. The capital of France is ${config.context.capital}.`),
            ];
        },
    });

    const result = await agent.invoke({
        messages: [
            new SystemMessage("You are a helpful assistant."),
            new HumanMessage("What is the capital of France?"),
        ],
    }, {
        context: {
            capital: "Paris",
        },
    });

    description?: string

    An optional description for the agent. This can be used to describe the agent to the underlying supervisor LLM.

    includeAgentName?: "inline"

    Specifies how the agent name is exposed to the underlying supervisor LLM.

    • undefined: Relies on the LLM provider's AIMessage#name support. Currently, only OpenAI supports this.
    • "inline": Adds the agent name directly into the content field of the AIMessage using XML-style tags, e.g. "How can I help you?" -> "<name>agent_name</name><content>How can I help you?</content>" (see the sketch below).
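
    A minimal sketch (the model string and agent name are illustrative):

    const agent = createAgent({
        model: "openai:gpt-4o",
        name: "weather_agent",
        // Embed the agent name in message content via XML-style tags,
        // useful for providers without AIMessage#name support.
        includeAgentName: "inline",
    });
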
    middleware?: readonly AgentMiddleware<any, any, any>[]

    Middleware instances to run during agent execution. Each middleware can define its own state schema and hook into the agent lifecycle.
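
    A hedged sketch of wiring middleware in. summarizationMiddleware and its option names here are assumptions drawn from the middleware guide, not verified against this type; check the exports of your langchain version:

    import { summarizationMiddleware } from "langchain";

    const agent = createAgent({
        model: "openai:gpt-4o",
        tools: [getWeather],
        // Middleware hook into the agent loop and may extend the
        // agent state with their own schema.
        middleware: [
            // Assumed prebuilt middleware and option shape.
            summarizationMiddleware({ model: "openai:gpt-4o-mini" }),
        ],
    });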

    model: string | LanguageModelLike

    Defines the model to use for the agent. You can pass either an instance of a LangChain chat model or a string. If a string is provided, the agent initializes a chat model based on the given model name and provider. Various model providers are supported, and model parameters can be configured at runtime.

    const agent = createAgent({
        model: "anthropic:claude-3-7-sonnet-latest",
        // ...
    });

    import { ChatOpenAI } from "@langchain/openai";

    const agent = createAgent({
        model: new ChatOpenAI({ model: "gpt-4o" }),
        // ...
    });

    name?: string

    An optional name for the agent.

    responseFormat?: ResponseFormatType

    An optional schema for the final agent output.

    If provided, output will be formatted to match the given schema and returned in the 'structuredResponse' state key. If not provided, structuredResponse will not be present in the output state.

    Can be passed in as:

    • Zod schema

      const agent = createAgent({
          responseFormat: z.object({
              capital: z.string(),
          }),
          // ...
      });

    • JSON schema

      const agent = createAgent({
          responseFormat: {
              type: "json_schema",
              schema: {
                  type: "object",
                  properties: {
                      capital: { type: "string" },
                  },
                  required: ["capital"],
              },
          },
          // ...
      });

    • A structured output strategy (providerStrategy or toolStrategy)

      import { providerStrategy, toolStrategy } from "langchain";

      const agent = createAgent({
          responseFormat: providerStrategy(
              z.object({
                  capital: z.string(),
              })
          ),
          // or
          responseFormat: [
              toolStrategy({ ... }),
              toolStrategy({ ... }),
          ],
          // ...
      });

    Note: The graph will make a separate call to the LLM to generate the structured response after the agent loop has finished. This is not the only strategy for getting structured responses; see more options in this guide.
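
    For example, with the Zod responseFormat above, the parsed object is available on the invocation result (a minimal sketch reusing HumanMessage from the examples above):

    const result = await agent.invoke({
        messages: [new HumanMessage("What is the capital of France?")],
    });

    // Typed according to the schema passed to responseFormat.
    console.log(result.structuredResponse.capital); // e.g. "Paris"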

    signal?: AbortSignal

    An optional abort signal that indicates that the overall operation should be aborted.
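
    A minimal sketch using a standard AbortController to cancel a long-running run:

    const controller = new AbortController();
    // Abort the whole agent run after 30 seconds.
    const timeout = setTimeout(() => controller.abort(), 30_000);

    const agent = createAgent({
        model: "openai:gpt-4o",
        signal: controller.signal,
        // ...
    });

    try {
        await agent.invoke({ messages: [new HumanMessage("Summarize this repository.")] });
    } finally {
        clearTimeout(timeout);
    }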

    stateSchema?: StateSchema

    An optional schema for the agent state. It allows you to define custom state properties that persist across agent invocations and can be accessed in hooks, middleware, and throughout the agent's execution. The state is persisted when using a checkpointer and can be updated by middleware or during execution.

    Unlike the context (defined in contextSchema), the state is persisted between agent invocations when using a checkpointer, making it suitable for maintaining conversation history, user preferences, or any other data that should persist across multiple interactions.

    import { z } from "zod";
    import { createAgent } from "langchain";
    import { SystemMessage, HumanMessage } from "@langchain/core/messages";

    const agent = createAgent({
        model: "openai:gpt-4o",
        tools: [getWeather],
        stateSchema: z.object({
            userPreferences: z.object({
                temperatureUnit: z.enum(["celsius", "fahrenheit"]).default("celsius"),
                location: z.string().optional(),
            }).optional(),
            conversationCount: z.number().default(0),
        }),
        prompt: (state, config) => {
            const unit = state.userPreferences?.temperatureUnit || "celsius";
            return [
                new SystemMessage(`You are a helpful assistant. Use ${unit} for temperature.`),
            ];
        },
    });

    const result = await agent.invoke({
        messages: [
            new HumanMessage("What's the weather like?"),
        ],
        userPreferences: {
            temperatureUnit: "fahrenheit",
            location: "New York",
        },
        conversationCount: 1,
    });

    store?: BaseStore

    An optional store to persist the agent's state.
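
    A minimal sketch using the InMemoryStore from @langchain/langgraph (suitable for development; a persistent store would be used in production):

    import { InMemoryStore } from "@langchain/langgraph";

    const agent = createAgent({
        model: "openai:gpt-4o",
        // Long-lived key-value storage, e.g. for cross-thread memories.
        store: new InMemoryStore(),
        // ...
    });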

    systemPrompt?: string

    An optional system message for the model.
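
    For example:

    const agent = createAgent({
        model: "openai:gpt-4o",
        systemPrompt: "You are a helpful assistant. Answer concisely.",
        // ...
    });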

    tools?: (ClientTool | ServerTool)[]

    A list of tools for the agent to call.

    import { z } from "zod";
    import { tool } from "langchain";

    const weatherTool = tool(() => "Sunny!", {
        name: "get_weather",
        description: "Get the weather for a location",
        schema: z.object({
            location: z.string().describe("The location to get weather for"),
        }),
    });

    const agent = createAgent({
        tools: [weatherTool],
        // ...
    });

    version?: "v1" | "v2"

    Determines the version of the graph to create.

    Can be one of:

    • "v1": The tool node processes a single message. All tool calls in the message are executed in parallel within the tool node.
    • "v2": The tool node processes a single tool call. Tool calls are distributed across multiple instances of the tool node using the Send API.

    Default: "v2"