record_request(self, model_name: str, input_toks: int, output_toks: int) -> None

Accumulate token counts for one completed LLM request. Updates both the session totals and the per-model breakdown.

Parameters:
    model_name (str): The model that served this request (used as the per-model key). Pass an empty string to skip the per-model breakdown for this request.
    input_toks (int): Input tokens for this request.
    output_toks (int): Output tokens for this request.
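A minimal sketch of a tracker exposing this method, assuming the documented signature and the empty-string rule; the `UsageTracker` class name and its field names (`total_input_toks`, `total_output_toks`, `by_model`) are illustrative assumptions, not part of the documented API:

```python
from collections import defaultdict

class UsageTracker:
    """Hypothetical tracker; only record_request's signature and
    behavior come from the documentation above."""

    def __init__(self) -> None:
        self.total_input_toks = 0
        self.total_output_toks = 0
        # Per-model breakdown: model name -> [input_toks, output_toks]
        self.by_model: dict[str, list[int]] = defaultdict(lambda: [0, 0])

    def record_request(
        self, model_name: str, input_toks: int, output_toks: int
    ) -> None:
        """Accumulate token counts for one completed LLM request."""
        # Session totals are always updated.
        self.total_input_toks += input_toks
        self.total_output_toks += output_toks
        # An empty model name skips the per-model breakdown.
        if model_name:
            counts = self.by_model[model_name]
            counts[0] += input_toks
            counts[1] += output_toks
```

Example use: after `record_request("gpt-4o", 100, 20)` followed by `record_request("", 5, 1)`, the session totals include both requests, but only `"gpt-4o"` appears in the per-model breakdown.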