Run when the LLM finishes running.
If include_output_tokens is set to True, the number of tokens in the LLM completion is counted toward the rate limit.
include_output_tokens
on_llm_end(self, response: LLMResult, **kwargs: Any) -> None
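A minimal sketch of how such a callback might count completion tokens for rate limiting. The `LLMResult` stand-in, the `TokenRateLimitHandler` class name, and the `token_usage` dict layout are assumptions made so the example runs without LangChain installed; they are not the library's actual implementation.

```python
from dataclasses import dataclass, field
from typing import Any, Dict


# Minimal stand-in for langchain_core.outputs.LLMResult, so this
# sketch is self-contained (assumption, not the real class).
@dataclass
class LLMResult:
    llm_output: Dict[str, Any] = field(default_factory=dict)


class TokenRateLimitHandler:
    """Illustrative handler: when include_output_tokens is True, the
    completion tokens reported by the LLM are added to the running
    rate-limit count. Attribute names here are hypothetical."""

    def __init__(self, include_output_tokens: bool = False) -> None:
        self.include_output_tokens = include_output_tokens
        self.tokens_used = 0

    def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:
        # Run when the LLM finishes; optionally count output tokens.
        if self.include_output_tokens:
            usage = response.llm_output.get("token_usage", {})
            self.tokens_used += usage.get("completion_tokens", 0)


handler = TokenRateLimitHandler(include_output_tokens=True)
handler.on_llm_end(
    LLMResult(llm_output={"token_usage": {"completion_tokens": 42}})
)
print(handler.tokens_used)  # 42
```

With `include_output_tokens=False`, `on_llm_end` leaves the count untouched, so only input-side tokens would be rate limited.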