langchain.js

    Class ClearToolUsesEdit

    Strategy for clearing tool outputs when token limits are exceeded.

    This strategy mirrors Anthropic's clear_tool_uses_20250919 behavior: when the conversation grows too large, older tool results are replaced with placeholder text. The most recent tool results are preserved, and specific tools can be excluded from clearing.

    import { ClearToolUsesEdit } from "langchain";

    const edit = new ClearToolUsesEdit({
      trigger: { tokens: 100000 },   // Start clearing at 100K tokens
      keep: { messages: 3 },         // Keep 3 most recent tool results
      excludeTools: ["important"],   // Never clear "important" tool
      clearToolInputs: false,        // Keep tool call arguments
      placeholder: "[cleared]",      // Replacement text
    });

    // Multiple trigger conditions
    const edit2 = new ClearToolUsesEdit({
      trigger: [
        { tokens: 100000, messages: 50 },
        { tokens: 50000, messages: 100 },
      ],
      keep: { messages: 3 },
    });

    // Fractional trigger with model profile
    const edit3 = new ClearToolUsesEdit({
      trigger: { fraction: 0.8 },  // Trigger at 80% of the model's max tokens
      keep: { fraction: 0.3 },     // Keep 30% of the model's max tokens
    });
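
    To make the clearing behavior concrete, here is an illustrative conversation shape (hand-written for this page, not produced by the library): once a trigger fires, older eligible tool results are the ones replaced by the placeholder, while excluded tools and the most recent results survive.

    import { HumanMessage, AIMessage, ToolMessage } from "@langchain/core/messages";

    // Illustrative only: with keep: { messages: 3 } and excludeTools: ["important"],
    // older, non-excluded tool results are the candidates for replacement.
    const messages = [
      new HumanMessage("Check the weather, then pull the account summary."),
      new AIMessage({
        content: "",
        tool_calls: [{ name: "weather", args: { city: "Berlin" }, id: "call_1" }],
      }),
      new ToolMessage({ content: "Sunny, 22°C", tool_call_id: "call_1" }), // old result: eligible for clearing
      new AIMessage({
        content: "",
        tool_calls: [{ name: "important", args: {}, id: "call_2" }],
      }),
      new ToolMessage({ content: "Account summary...", tool_call_id: "call_2" }), // excluded tool: never cleared
      // ...the most recent tool results are preserved according to keep
    ];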


    Properties

    clearAtLeast: number
    clearToolInputs: boolean
    excludeTools: Set<string>
    keep: { fraction?: number; messages?: number; tokens?: number }

    Type Declaration

    • Optional fraction?: number

      Fraction of the model's context size to keep

    • Optional messages?: number

      Number of messages to keep

    • Optional tokens?: number

      Number of tokens to keep

    model: BaseLanguageModel
    placeholder: string
    trigger:
        | { fraction?: number; messages?: number; tokens?: number }
        | { fraction?: number; messages?: number; tokens?: number }[]

    Type Declaration

    • { fraction?: number; messages?: number; tokens?: number }
      • Optional fraction?: number

        Fraction of the model's context size to use as the trigger

      • Optional messages?: number

        Number of messages to use as the trigger

      • Optional tokens?: number

        Number of tokens to use as the trigger

    • { fraction?: number; messages?: number; tokens?: number }[]
      • Optional fraction?: number

        Fraction of the model's context size to use as the trigger

      • Optional messages?: number

        Number of messages to use as the trigger

      • Optional tokens?: number

        Number of tokens to use as the trigger
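
    For fraction-based values, the threshold presumably has to be resolved against the model's maximum context size before it can be compared with an actual token count. A minimal sketch of that arithmetic, where maxInputTokens is an assumed stand-in for whatever the model profile actually exposes:

    // Hypothetical helper: turn a trigger condition into an absolute token threshold.
    // maxInputTokens stands in for the model-profile value the library would consult.
    function resolveTokenThreshold(
      condition: { fraction?: number; tokens?: number },
      maxInputTokens: number
    ): number | undefined {
      if (condition.tokens !== undefined) return condition.tokens;
      if (condition.fraction !== undefined) {
        return Math.floor(condition.fraction * maxInputTokens);
      }
      return undefined; // no token-based threshold; a messages threshold may still apply
    }

    // With a 200K-token context window, { fraction: 0.8 } corresponds to 160000 tokens.
    resolveTokenThreshold({ fraction: 0.8 }, 200_000);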

    Methods

    • Apply an edit to the message list, returning the new token count.

      This method should:

      1. Check if editing is needed based on the tokens parameter
      2. Modify the messages array in-place (if needed)
      3. Return the new token count after modifications

      Parameters

      • params: { countTokens: TokenCounter; messages: BaseMessage[]; model: BaseLanguageModel }

        Parameters for the editing operation

      Returns Promise<void>

      The updated token count after applying edits
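
      As a rough illustration of the contract those three steps describe, here is a standalone sketch (not the library's implementation). The TokenCounter type used here is an assumption, the clearing rule is reduced to a single threshold, and the sketch returns the count only to mirror step 3, whereas the documented signature above resolves to void.

      import { ToolMessage, type BaseMessage } from "@langchain/core/messages";

      // Assumed shape for TokenCounter; the real type may differ.
      type TokenCounter = (messages: BaseMessage[]) => number | Promise<number>;

      // Illustrative only: follows the three numbered steps above for a heavily simplified edit.
      async function applyEditSketch(params: {
        countTokens: TokenCounter;
        messages: BaseMessage[];
        threshold: number;
        placeholder: string;
      }): Promise<number> {
        const { countTokens, messages, threshold, placeholder } = params;

        // 1. Check if editing is needed based on the current token count.
        const count = await countTokens(messages);
        if (count <= threshold) return count;

        // 2. Modify the messages array in place: blank out the oldest tool result.
        const oldest = messages.find((m) => m instanceof ToolMessage);
        if (oldest) oldest.content = placeholder;

        // 3. Return the new token count after modifications.
        return await countTokens(messages);
      }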