Creates a middleware to limit the number of model calls at both thread and run levels.

This middleware helps prevent excessive model API calls by enforcing limits on how many times the model can be invoked. It supports two types of limits:

- Thread-level limit: Restricts the total number of model calls across an entire conversation thread.
- Run-level limit: Restricts the number of model calls within a single agent run/invocation.
How It Works
The middleware intercepts model requests before they are sent and checks the current call counts
against the configured limits. If either limit is exceeded, it throws a ModelCallLimitMiddlewareError
to stop execution and prevent further API calls.
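The counting logic described above can be sketched as a small, self-contained snippet. This is a hypothetical simplification, not the actual langchain implementation: `ModelCallLimitError`, `makeLimiter`, and `startRun` are illustrative names, but the core idea matches the description — the run counter resets per run while the thread counter persists, and exceeding either limit throws before the model is called.

```typescript
// Hypothetical sketch of the two-level limit check (names are illustrative).
class ModelCallLimitError extends Error {}

function makeLimiter(threadLimit: number, runLimit: number) {
  let threadCount = 0; // persists across runs in the same thread
  return {
    startRun() {
      let runCount = 0; // resets for each run
      // Returned function is called before each model request.
      return () => {
        if (threadCount >= threadLimit) {
          throw new ModelCallLimitError(`thread limit of ${threadLimit} reached`);
        }
        if (runCount >= runLimit) {
          throw new ModelCallLimitError(`run limit of ${runLimit} reached`);
        }
        threadCount++;
        runCount++;
      };
    },
  };
}

// One thread, two runs: the run counter resets, the thread counter does not.
const limiter = makeLimiter(3, 2);
const run1 = limiter.startRun();
run1();
run1(); // 2 calls OK in run 1
const run2 = limiter.startRun();
run2(); // 3rd thread-level call still OK
let blocked = "";
try {
  run2();
} catch (e) {
  blocked = (e as Error).message;
}
console.log(blocked); // thread limit of 3 reached
```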
Use Cases

- Cost Control: Prevent runaway costs from excessive model calls in production.
- Testing: Ensure agents don't make too many calls during development/testing.
- Safety: Limit potential infinite loops or recursive agent behaviors.
- Rate Limiting: Enforce organizational policies on model usage per conversation.
Example

```typescript
// Limit to 10 calls per thread and 3 calls per run
const agent = createAgent({
  model: "openai:gpt-4o-mini",
  tools: [myTool],
  middleware: [
    modelCallLimitMiddleware({ threadLimit: 10, runLimit: 3 }),
  ],
});
```
Example
```typescript
// Limits can also be configured at runtime via context
const result = await agent.invoke(
  { messages: ["Hello"] },
  {
    configurable: {
      threadLimit: 5, // Override the default limit for this run
    },
  },
);
```
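The override behavior shown above — a limit passed via `configurable` taking priority over the middleware's default for that run — can be sketched as a one-liner. `resolveThreadLimit` is a hypothetical helper for illustration, not part of the library's API.

```typescript
// Hypothetical sketch of the override precedence: a runtime value from
// `configurable` wins; otherwise the middleware's default applies.
function resolveThreadLimit(
  defaultLimit: number,
  configurable?: { threadLimit?: number },
): number {
  return configurable?.threadLimit ?? defaultLimit;
}

console.log(resolveThreadLimit(10)); // 10 (middleware default)
console.log(resolveThreadLimit(10, { threadLimit: 5 })); // 5 (runtime override)
```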