LangChain Reference
@langchain/core/messages
Function · Since v1.0

trimMessages

Trims a list of messages so that its total token count stays below a specified maximum.

trimMessages(
  options: TrimMessagesFields
): Runnable<BaseMessage<MessageStructure<MessageToolSet>, MessageType>[], BaseMessage<MessageStructure<MessageToolSet>, MessageType>[]>

Used in Docs

  • Memory
  • Short-term memory

Parameters

Name       Type                Description
options *  TrimMessagesFields  Trimming options.
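Conceptually, trimming keeps the longest run of most-recent messages whose combined token count fits under the limit, dropping older messages from the front. A minimal standalone sketch of that idea, using plain token counts in place of messages (illustrative only, not the library's implementation; `trimToLast` and its inputs are made up for this sketch):

```typescript
// Each entry is the token count of one message, oldest first.
// Drop messages from the front until the remaining total fits
// under maxTokens, mirroring a "keep the most recent" strategy.
function trimToLast(tokenCounts: number[], maxTokens: number): number[] {
  let total = 0;
  let start = tokenCounts.length;
  // Walk backwards, greedily keeping the most recent messages.
  for (let i = tokenCounts.length - 1; i >= 0; i--) {
    if (total + tokenCounts[i] > maxTokens) break;
    total += tokenCounts[i];
    start = i;
  }
  return tokenCounts.slice(start);
}

// With counts [10, 10, 14, 10, 10] and a 35-token budget, the two
// oldest messages are dropped.
const kept = trimToLast([10, 10, 14, 10, 10], 35);
```

The real function works on `BaseMessage` objects and supports more options (e.g. keeping the system message, trimming from the front instead), but the budgeting loop above captures the core behavior.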

Example

import { trimMessages, AIMessage, BaseMessage, HumanMessage, SystemMessage } from "@langchain/core/messages";

const messages = [
  new SystemMessage("This is a 4 token text. The full message is 10 tokens."),
  new HumanMessage({
    content: "This is a 4 token text. The full message is 10 tokens.",
    id: "first",
  }),
  new AIMessage({
    content: [
      { type: "text", text: "This is the FIRST 4 token block." },
      { type: "text", text: "This is the SECOND 4 token block." },
    ],
    id: "second",
  }),
  new HumanMessage({
    content: "This is a 4 token text. The full message is 10 tokens.",
    id: "third",
  }),
  new AIMessage({
    content: "This is a 4 token text. The full message is 10 tokens.",
    id: "fourth",
  }),
];

function dummyTokenCounter(messages: BaseMessage[]): number {
  // treat each message like it adds 3 default tokens at the beginning
  // of the message and at the end of the message. 3 + 4 + 3 = 10 tokens
  // per message.

  const defaultContentLen = 4;
  const defaultMsgPrefixLen = 3;
  const defaultMsgSuffixLen = 3;

  let count = 0;
  for (const msg of messages) {
    if (typeof msg.content === "string") {
      count += defaultMsgPrefixLen + defaultContentLen + defaultMsgSuffixLen;
    }
    if (Array.isArray(msg.content)) {
      count +=
        defaultMsgPrefixLen +
        msg.content.length * defaultContentLen +
        defaultMsgSuffixLen;
    }
  }
  return count;
}

// Build the trimmer and run it over the messages. With strategy "last"
// the most recent messages are kept; includeSystem also preserves the
// system message at the front.
const trimmer = trimMessages({
  maxTokens: 35,
  strategy: "last",
  includeSystem: true,
  tokenCounter: dummyTokenCounter,
});

const trimmed = await trimmer.invoke(messages);