Optional checkpointer
An optional checkpoint saver to persist the agent's state.

Optional contextSchema
An optional schema for the context. It allows you to pass a typed context object into the agent invocation and to access it in hooks such as prompt and middleware.
As opposed to the agent state, defined in stateSchema, the context is not persisted between agent invocations.
const agent = createAgent({
  llm: model,
  tools: [getWeather],
  contextSchema: z.object({
    capital: z.string(),
  }),
  prompt: (state, config) => {
    return [
      new SystemMessage(`You are a helpful assistant. The capital of France is ${config.context.capital}.`),
    ];
  },
});

const result = await agent.invoke({
  messages: [
    new SystemMessage("You are a helpful assistant."),
    new HumanMessage("What is the capital of France?"),
  ],
}, {
  context: {
    capital: "Paris",
  },
});
Optional description
An optional description for the agent. This can be used to describe the agent to the underlying supervisor LLM.

Optional includeAgentName
Use to specify how to expose the agent name to the underlying supervisor LLM.

- undefined: Relies on the LLM provider's AIMessage#name field. Currently, only OpenAI supports this.
- "inline": Adds the agent name directly into the content field of the AIMessage using XML-style tags. Example: "How can I help you?" -> "<name>agent_name</name><content>How can I help you?</content>"

Optional middleware
Middleware instances to run during agent execution. Each middleware can define its own state schema and hook into the agent lifecycle.
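The XML-style "inline" format for includeAgentName can be sketched as a plain function (illustrative only, not part of the langchain API):

```typescript
// Wrap the agent name and the reply content in XML-style tags, as the
// "inline" option does inside the AIMessage content field.
function inlineAgentName(name: string, content: string): string {
  return `<name>${name}</name><content>${content}</content>`;
}

console.log(inlineAgentName("agent_name", "How can I help you?"));
// <name>agent_name</name><content>How can I help you?</content>
```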
model
Defines the model to use for the agent. You can pass in either an instance of a LangChain chat model or a string. If a string is provided, the agent initializes a chat model based on the given model name and provider. Various model providers are supported, and model parameters can be configured at runtime.
Optional name
An optional name for the agent.
Optional responseFormat
An optional schema for the final agent output.
If provided, output will be formatted to match the given schema and returned in the structuredResponse state key. If not provided, structuredResponse will not be present in the output state.
Can be passed in as:
const agent = createAgent({
  responseFormat: z.object({
    capital: z.string(),
  }),
  // ...
});

const agent = createAgent({
  responseFormat: {
    type: "json_schema",
    schema: {
      type: "object",
      properties: {
        capital: { type: "string" },
      },
      required: ["capital"],
    },
  },
  // ...
});

import { providerStrategy, toolStrategy } from "langchain";

const agent = createAgent({
  responseFormat: providerStrategy(
    z.object({
      capital: z.string(),
    })
  ),
  // or
  responseFormat: [
    toolStrategy({ ... }),
    toolStrategy({ ... }),
  ],
  // ...
});
Note: The graph will make a separate call to the LLM to generate the structured response after the agent loop has finished. This is not the only strategy for getting structured responses; see more options in this guide.
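As the note above says, the structured output lands in the structuredResponse key of the result. A minimal local sketch of reading it, with a hand-written type and hypothetical values standing in for a real agent call:

```typescript
// Illustrative result shape only: the real result also carries messages
// and any custom state keys.
type AgentResult = { structuredResponse?: { capital: string } };

function readCapital(result: AgentResult): string | undefined {
  // structuredResponse is absent when no responseFormat was configured.
  return result.structuredResponse?.capital;
}

console.log(readCapital({ structuredResponse: { capital: "Paris" } })); // Paris
console.log(readCapital({})); // undefined
```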
Optional signal
An optional abort signal that indicates that the overall operation should be aborted.
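A hedged sketch of wiring a standard AbortController into the signal option; the createAgent call itself is elided here since it needs a model and credentials:

```typescript
const controller = new AbortController();

// const agent = createAgent({ ..., signal: controller.signal });
// await agent.invoke({ messages: [...] });

// Abort from anywhere, e.g. a timeout or a user action:
controller.abort();
console.log(controller.signal.aborted); // true
```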
Optional stateSchema
An optional schema for the agent state. It allows you to define custom state properties that can be accessed in hooks, middleware, and throughout the agent's execution, and that can be updated by middleware or during execution.
As opposed to the context (defined in contextSchema), the state is persisted between agent invocations when using a checkpointer, making it suitable for maintaining conversation history, user preferences, or any other data that should persist across multiple interactions.
import { z } from "zod";
import { createAgent } from "langchain";
import { SystemMessage, HumanMessage } from "@langchain/core/messages";

const agent = createAgent({
  model: "openai:gpt-4o",
  tools: [getWeather],
  stateSchema: z.object({
    userPreferences: z.object({
      temperatureUnit: z.enum(["celsius", "fahrenheit"]).default("celsius"),
      location: z.string().optional(),
    }).optional(),
    conversationCount: z.number().default(0),
  }),
  prompt: (state, config) => {
    const unit = state.userPreferences?.temperatureUnit || "celsius";
    return [
      new SystemMessage(`You are a helpful assistant. Use ${unit} for temperature.`),
    ];
  },
});

const result = await agent.invoke({
  messages: [
    new HumanMessage("What's the weather like?"),
  ],
  userPreferences: {
    temperatureUnit: "fahrenheit",
    location: "New York",
  },
  conversationCount: 1,
});
Optional store
An optional store to persist the agent's state.

Optional systemPrompt
An optional system message for the model.

Optional tools
A list of tools or a ToolNode.
import { tool } from "langchain";
import { z } from "zod";

const weatherTool = tool(() => "Sunny!", {
  name: "get_weather",
  description: "Get the weather for a location",
  schema: z.object({
    location: z.string().describe("The location to get weather for"),
  }),
});

const agent = createAgent({
  tools: [weatherTool],
  // ...
});
Optional version
Determines the version of the graph to create.
Can be one of:

- "v1": The tool node processes a single message. All tool calls in the message are executed in parallel within the tool node.
- "v2": The tool node processes a single tool call. Tool calls are distributed across multiple instances of the tool node using the Send API.
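The difference between the two versions can be sketched in plain TypeScript (conceptual only, not langchain internals): "v1" hands all tool calls from one message to a single tool-node invocation, while "v2" sends each tool call to its own tool-node instance.

```typescript
type ToolCall = { name: string; args: Record<string, unknown> };

function batchesV1(toolCalls: ToolCall[]): ToolCall[][] {
  // One batch: a single tool node executes all calls (in parallel internally).
  return toolCalls.length ? [toolCalls] : [];
}

function batchesV2(toolCalls: ToolCall[]): ToolCall[][] {
  // One batch per call: each call is dispatched to a separate tool-node instance.
  return toolCalls.map((call) => [call]);
}

const calls: ToolCall[] = [
  { name: "get_weather", args: { location: "Paris" } },
  { name: "get_weather", args: { location: "Tokyo" } },
];
console.log(batchesV1(calls).length); // 1
console.log(batchesV2(calls).length); // 2
```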