Options: configuration options for the summarization middleware (the summarization model, the trigger condition(s), and how much recent history to keep).

Returns: a middleware instance.
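The options can be read as roughly the following shape. This is a hedged sketch inferred from the example below, not the library's actual type definition; the interface name `SummarizationOptionsSketch` is made up for illustration.

```ts
import type { BaseChatModel } from "@langchain/core/language_models/chat_models";

// Hypothetical sketch of the options object, inferred from the example below.
interface SummarizationOptionsSketch {
  // Chat model used to produce the summary of older messages.
  model: BaseChatModel;
  // A single condition, or a list of alternative conditions; within a
  // condition every field must be met, across the list any one suffices.
  trigger:
    | { tokens?: number; messages?: number }
    | { tokens?: number; messages?: number }[];
  // How many recent messages to keep out of the summary.
  keep: { messages: number };
}
```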
Example:

```ts
import { createAgent, summarizationMiddleware } from "langchain";
import { ChatOpenAI } from "@langchain/openai";

// `model` and `getWeather` are assumed to be a chat model and a tool defined elsewhere.

// Single condition: trigger if tokens >= 4000 AND messages >= 10
const agent1 = createAgent({
  llm: model,
  tools: [getWeather],
  middleware: [
    summarizationMiddleware({
      model: new ChatOpenAI({ model: "gpt-4o" }),
      trigger: { tokens: 4000, messages: 10 },
      keep: { messages: 20 },
    }),
  ],
});

// Multiple conditions: trigger if (tokens >= 5000 AND messages >= 3)
// OR (tokens >= 3000 AND messages >= 6)
const agent2 = createAgent({
  llm: model,
  tools: [getWeather],
  middleware: [
    summarizationMiddleware({
      model: new ChatOpenAI({ model: "gpt-4o" }),
      trigger: [
        { tokens: 5000, messages: 3 },
        { tokens: 3000, messages: 6 },
      ],
      keep: { messages: 20 },
    }),
  ],
});
```
The summarization middleware automatically summarizes conversation history when token limits are approached. It monitors message token counts and, once a trigger threshold is reached, summarizes older messages while preserving recent ones, maintaining context continuity by keeping AI/Tool message pairs together.
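As a way to make the trigger semantics from the example concrete, the sketch below evaluates a trigger the way the comments above describe: AND across the fields of a single condition, OR across a list of conditions. The names `TriggerCondition` and `shouldSummarize` are illustrative only, not the library's internals.

```ts
interface TriggerCondition {
  tokens?: number;
  messages?: number;
}

function shouldSummarize(
  trigger: TriggerCondition | TriggerCondition[],
  tokenCount: number,
  messageCount: number
): boolean {
  const conditions = Array.isArray(trigger) ? trigger : [trigger];
  // Any one condition can fire (OR across the list); within a condition,
  // every specified threshold must be met (AND across its fields).
  return conditions.some(
    (c) =>
      (c.tokens === undefined || tokenCount >= c.tokens) &&
      (c.messages === undefined || messageCount >= c.messages)
  );
}

// e.g. with trigger = [{ tokens: 5000, messages: 3 }, { tokens: 3000, messages: 6 }]:
// shouldSummarize(trigger, 3500, 7) === true  (the second condition is met)
```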