LangChain Reference
Functionā—Since v1.1

contextEditingMiddleware

Middleware that automatically prunes tool results to manage context size.

This middleware applies a sequence of edits when the total input token count exceeds configured thresholds. By default, it uses the ClearToolUsesEdit strategy which mirrors Anthropic's clear_tool_uses_20250919 behaviour by clearing older tool results once the conversation exceeds 100,000 tokens.
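To make the defaults concrete, the no-argument call can be sketched as an explicit configuration. This sketch assumes the 100,000-token default maps to a `trigger` of `{ tokens: 100000 }` on `ClearToolUsesEdit` (parameter names taken from the examples below; other defaults are left unspecified, and `searchTool`/`calculatorTool` are placeholder tools):

```typescript
import { contextEditingMiddleware, ClearToolUsesEdit, createAgent } from "langchain";

// Roughly equivalent to contextEditingMiddleware() with no arguments:
// clear older tool results once the input exceeds 100,000 tokens.
const agent = createAgent({
  model: "anthropic:claude-sonnet-4-5",
  tools: [searchTool, calculatorTool],
  middleware: [
    contextEditingMiddleware({
      edits: [new ClearToolUsesEdit({ trigger: { tokens: 100000 } })],
    }),
  ],
});
```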

Basic Usage

Use the middleware with default settings to automatically manage context:

contextEditingMiddleware(
  config: ContextEditingMiddlewareConfig = {}
): AgentMiddleware<undefined, undefined, unknown, readonly (ClientTool | ServerTool)[]>

Used in Docs

  • Prebuilt middleware

Parameters

Name: config
Type: ContextEditingMiddlewareConfig
Default: {}

Configuration options for the middleware.

Example 1

import { contextEditingMiddleware, createAgent } from "langchain";

const agent = createAgent({
  model: "anthropic:claude-sonnet-4-5",
  tools: [searchTool, calculatorTool],
  middleware: [
    contextEditingMiddleware(),
  ],
});

Example 2

import { contextEditingMiddleware, ClearToolUsesEdit, createAgent } from "langchain";

// Single condition: trigger if tokens >= 50000 AND messages >= 20
const agent1 = createAgent({
  model: "anthropic:claude-sonnet-4-5",
  tools: [searchTool, calculatorTool],
  middleware: [
    contextEditingMiddleware({
      edits: [
        new ClearToolUsesEdit({
          trigger: { tokens: 50000, messages: 20 },
          keep: { messages: 5 },
          excludeTools: ["search"],
          clearToolInputs: true,
        }),
      ],
      tokenCountMethod: "approx",
    }),
  ],
});

// Multiple conditions: trigger if (tokens >= 50000 AND messages >= 20) OR (tokens >= 30000 AND messages >= 50)
const agent2 = createAgent({
  model: "anthropic:claude-sonnet-4-5",
  tools: [searchTool, calculatorTool],
  middleware: [
    contextEditingMiddleware({
      edits: [
        new ClearToolUsesEdit({
          trigger: [
            { tokens: 50000, messages: 20 },
            { tokens: 30000, messages: 50 },
          ],
          keep: { messages: 5 },
        }),
      ],
    }),
  ],
});

// Fractional trigger with model profile (chatModel is a chat model instance
// whose profile supplies the maximum input token limit)
const agent3 = createAgent({
  model: chatModel,
  tools: [searchTool, calculatorTool],
  middleware: [
    contextEditingMiddleware({
      edits: [
        new ClearToolUsesEdit({
          trigger: { fraction: 0.8 },  // Trigger at 80% of model's max tokens
          keep: { fraction: 0.3 },     // Keep 30% of model's max tokens
          model: chatModel,
        }),
      ],
    }),
  ],
});

Example 3

import { contextEditingMiddleware, type ContextEdit, type TokenCounter } from "langchain";
import type { BaseMessage } from "@langchain/core/messages";

class CustomEdit implements ContextEdit {
  async apply({
    messages,
    countTokens,
  }: {
    tokens: number;
    messages: BaseMessage[];
    countTokens: TokenCounter;
  }): Promise<number> {
    // Implement your custom editing logic here: trim or rewrite entries
    // in the messages array, then return the new token count after edits.
    return countTokens(messages);
  }
}
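A custom edit is wired in through the same `edits` array as the built-in strategies. A minimal sketch, assuming the `CustomEdit` class above and placeholder tools `searchTool`/`calculatorTool`:

```typescript
import { contextEditingMiddleware, createAgent } from "langchain";

// Apply the custom edit whenever the middleware's thresholds are exceeded.
const agent = createAgent({
  model: "anthropic:claude-sonnet-4-5",
  tools: [searchTool, calculatorTool],
  middleware: [
    contextEditingMiddleware({
      edits: [new CustomEdit()],
    }),
  ],
});
```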