Summarization middleware.
ContextFraction: Fraction of the model's maximum input tokens.
ContextTokens: Absolute number of tokens.
ContextMessages: Absolute number of messages.
Initialize a chat model from any supported provider using a unified interface.
Two main use cases:

- Initialize a fixed model by name up front.
- Create a configurable model whose model/provider is selected at runtime via config, making it easy to switch between models/providers without changing your code.

Requires the integration package for the chosen model provider to be installed.
See the model_provider parameter below for specific package names
(e.g., pip install langchain-openai).
Refer to the provider integration's API reference
for supported model parameters to use as **kwargs.
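The unified-interface idea can be sketched with a minimal dispatcher. The classes and factory below are hypothetical stand-ins for illustration; real code would call the library's own initializer and import the provider packages (e.g. langchain-openai).

```python
from dataclasses import dataclass

# Hypothetical stand-in chat model classes; real code would import
# e.g. ChatOpenAI / ChatAnthropic from their integration packages.
@dataclass
class FakeOpenAIChat:
    model: str
    temperature: float = 0.0

@dataclass
class FakeAnthropicChat:
    model: str
    temperature: float = 0.0

# A minimal init_chat_model-style factory: one entry point, provider
# chosen by name, extra kwargs forwarded to the selected class.
_PROVIDERS = {
    "openai": FakeOpenAIChat,
    "anthropic": FakeAnthropicChat,
}

def init_chat_model_sketch(model: str, model_provider: str, **kwargs):
    try:
        cls = _PROVIDERS[model_provider]
    except KeyError:
        raise ValueError(f"Unsupported provider: {model_provider!r}")
    return cls(model=model, **kwargs)
```

Switching providers is then a one-argument change at the call site rather than a rewrite of the calling code.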
Base middleware class for an agent.
Subclass this and implement any of the defined methods to customize agent behavior between steps in the main agent loop.
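The subclassing pattern can be sketched as follows. The hook names before_model/after_model are assumptions for illustration; the actual base class defines its own set of hook methods.

```python
from typing import Optional

class AgentMiddlewareSketch:
    """Hypothetical stand-in for the middleware base class."""

    def before_model(self, state: dict) -> Optional[dict]:
        return None  # default: no state update

    def after_model(self, state: dict) -> Optional[dict]:
        return None  # default: no state update

class LoggingMiddleware(AgentMiddlewareSketch):
    """Records how many messages the model sees at each step."""

    def before_model(self, state: dict) -> Optional[dict]:
        counts = state.get("message_counts", [])
        return {"message_counts": counts + [len(state["messages"])]}
```

Hooks that return None leave the state unchanged; hooks that return a dict contribute a state update between steps of the agent loop.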
State schema for the agent.
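As a sketch, an agent state is often a typed mapping whose central key is the message list. The schema below is illustrative only; real schemas may carry additional keys.

```python
from typing import TypedDict

class AgentStateSketch(TypedDict):
    # Conversation history; actual schemas may add tool results,
    # scratchpads, or other custom keys.
    messages: list
```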
Summarizes conversation history when token limits are approached.
This middleware monitors message token counts and automatically summarizes older messages when a threshold is reached. Recent messages are preserved, and context continuity is maintained by ensuring AI/tool message pairs stay together.
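The trigger-and-keep flow described above can be sketched as a plain function. The (role, text) message shape and the caller-supplied count_tokens/summarize helpers are assumptions for illustration, not the library's API.

```python
def summarize_if_needed(messages, max_tokens, keep_last, count_tokens, summarize):
    """Summarize older messages once the token threshold is exceeded.

    messages: list of (role, text) pairs.
    count_tokens, summarize: caller-supplied functions (illustrative).
    """
    total = sum(count_tokens(text) for _, text in messages)
    if total <= max_tokens:
        return messages  # under the threshold: nothing to do

    cutoff = len(messages) - keep_last
    # Keep AI/tool pairs together: never split a tool result from the
    # AI message that requested it.
    while 0 < cutoff < len(messages) and messages[cutoff][0] == "tool":
        cutoff -= 1
    older, recent = messages[:cutoff], messages[cutoff:]
    if not older:
        return messages
    # Replace the older span with a single summary message.
    return [("system", summarize(older))] + recent
```

Note how the cutoff is walked backwards past tool messages so the retained window always starts at a safe boundary.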
Union type for context size specifications.

Can be either:

- ContextFraction: a fraction of the model's maximum input tokens.
- ContextTokens: an absolute number of tokens.
- ContextMessages: an absolute number of messages.

Depending on whether it is used with the trigger or keep parameter, this type indicates either when to trigger summarization or how much context to retain.
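A resolver for these three forms might look like the following sketch. The tuple encoding ("fraction", 0.8) and so on is an assumption about how the specifications are written, not the library's exact representation.

```python
def resolve_context_size(spec, max_input_tokens):
    """Convert a context-size spec into an absolute budget.

    Returns ("tokens", n) or ("messages", n). The tuple shapes here
    are illustrative, not the library's exact types.
    """
    kind, value = spec
    if kind == "fraction":
        # Scale the model's maximum input tokens by the given fraction.
        return ("tokens", int(max_input_tokens * value))
    if kind in ("tokens", "messages"):
        return (kind, int(value))
    raise ValueError(f"Unknown context size kind: {kind!r}")
```

The same resolved budget can then serve either role: as a trigger threshold or as the amount of context to retain.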