LangChain Reference
Classā—Since v1.1

ClearToolUsesEdit

Strategy for clearing tool outputs when token limits are exceeded.

This strategy mirrors Anthropic's clear_tool_uses_20250919 behavior by replacing older tool results with a placeholder text when the conversation grows too large. It preserves the most recent tool results and can exclude specific tools from being cleared.

class ClearToolUsesEdit

Used in Docs

  • Prebuilt middleware

Example

import { ClearToolUsesEdit } from "langchain";

const edit = new ClearToolUsesEdit({
  trigger: { tokens: 100000 },  // Start clearing at 100K tokens
  keep: { messages: 3 },        // Keep 3 most recent tool results
  excludeTools: ["important"],   // Never clear "important" tool
  clearToolInputs: false,        // Keep tool call arguments
  placeholder: "[cleared]",      // Replacement text
});

// Multiple trigger conditions
const edit2 = new ClearToolUsesEdit({
  trigger: [
    { tokens: 100000, messages: 50 },
    { tokens: 50000, messages: 100 }
  ],
  keep: { messages: 3 },
});

// Fractional trigger with model profile
const edit3 = new ClearToolUsesEdit({
  trigger: { fraction: 0.8 },  // Trigger at 80% of model's max tokens
  keep: { fraction: 0.3 },     // Keep 30% of model's max tokens
});
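The trigger shapes in the example above (absolute token/message counts, an array of alternative conditions, and fractions of a model's token limit) can be illustrated with a small standalone sketch. The `Trigger` interface and `shouldTrigger` helper below are hypothetical stand-ins, not part of the langchain API, and the combination semantics (all conditions within one trigger object must hold; an array fires when any member fires) are an assumption for illustration:

```typescript
// Hypothetical stand-in for the trigger shapes shown in the example above.
interface Trigger {
  tokens?: number;
  messages?: number;
  fraction?: number; // fraction of the model's max input tokens
}

// Assumed semantics: a single trigger object fires only when ALL of its
// conditions are met; an array of triggers fires when ANY member fires.
function shouldTrigger(
  trigger: Trigger | Trigger[],
  state: { tokens: number; messages: number; maxTokens: number }
): boolean {
  if (Array.isArray(trigger)) {
    return trigger.some((t) => shouldTrigger(t, state));
  }
  const checks: boolean[] = [];
  if (trigger.tokens !== undefined) checks.push(state.tokens >= trigger.tokens);
  if (trigger.messages !== undefined) checks.push(state.messages >= trigger.messages);
  if (trigger.fraction !== undefined) {
    checks.push(state.tokens >= trigger.fraction * state.maxTokens);
  }
  return checks.length > 0 && checks.every(Boolean);
}
```

Under these assumed semantics, `trigger: [{ tokens: 100000, messages: 50 }, { tokens: 50000, messages: 100 }]` clears either when the conversation is both long and large, or when it is smaller but has accumulated many messages.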

Constructors

constructor

Properties

  • clearAtLeast: number
  • clearToolInputs: boolean
  • excludeTools: Set<string>
  • keep: __type
  • model: BaseLanguageModel
  • placeholder: string
  • trigger: __type | __type[]

Methods

  • apply → Promise<void>

Apply an edit to the message list, returning the new token count.

This method should:

  1. Check if editing is needed based on tokens parameter
  2. Modify the messages array in-place (if needed)
  3. Return the new token count after modifications
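The three-step contract above can be sketched with plain data structures. The `ToolMessage` shape, the character-based `countTokens` estimate, and the `applyClear` function below are illustrative assumptions, not the actual langchain implementation:

```typescript
// Minimal tool-result message shape, assumed for this sketch.
interface ToolMessage {
  role: "tool";
  name: string;
  content: string;
}

// Crude token estimate (~4 characters per token), an assumption for the sketch.
const countTokens = (msgs: ToolMessage[]): number =>
  msgs.reduce((n, m) => n + Math.ceil(m.content.length / 4), 0);

// Mirrors the apply() contract: (1) check whether the trigger is exceeded,
// (2) clear older tool results in place, sparing the most recent `keep`
// results and any excluded tools, (3) return the new token count.
function applyClear(
  messages: ToolMessage[],
  opts: {
    triggerTokens: number;
    keep: number;
    excludeTools: Set<string>;
    placeholder: string;
  }
): number {
  if (countTokens(messages) < opts.triggerTokens) return countTokens(messages);
  const toolIdxs = messages
    .map((m, i) => (m.role === "tool" ? i : -1))
    .filter((i) => i >= 0);
  const clearable = toolIdxs
    .slice(0, Math.max(0, toolIdxs.length - opts.keep)) // spare the most recent
    .filter((i) => !opts.excludeTools.has(messages[i].name));
  for (const i of clearable) messages[i].content = opts.placeholder; // in-place edit
  return countTokens(messages);
}
```

Note that the sketch mutates `messages` in place, as step 2 requires, so callers holding a reference to the array see the cleared placeholders.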
View source on GitHub