LLMManagerMixin()
Mixin for LLM callbacks.
on_llm_new_token()
Run on each new output token. Only available when streaming is enabled. Applies to both chat models and non-chat models (legacy text-completion LLMs).
on_llm_end()
Run when the LLM finishes running.
on_llm_error()
Run when the LLM raises an error.
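The three callbacks above can be sketched as a plain handler class driven by a streaming loop. This is a minimal, self-contained illustration of the callback protocol; `StreamLogger` and `run_fake_stream` are hypothetical names invented here, not part of the library.

```python
class StreamLogger:
    """Illustrative handler: collects streamed tokens and records the outcome."""

    def __init__(self):
        self.tokens = []
        self.outcome = None

    def on_llm_new_token(self, token, **kwargs):
        # Called once per token; only fires when streaming is enabled.
        self.tokens.append(token)

    def on_llm_end(self, response, **kwargs):
        # Called with the final response when the run completes normally.
        self.outcome = ("ok", response)

    def on_llm_error(self, error, **kwargs):
        # Called instead of on_llm_end when the run fails.
        self.outcome = ("error", error)


def run_fake_stream(handler, tokens):
    """Drive the handler the way a streaming LLM run would (illustrative)."""
    try:
        for tok in tokens:
            handler.on_llm_new_token(tok)
        handler.on_llm_end("".join(tokens))
    except Exception as exc:
        handler.on_llm_error(exc)


logger = StreamLogger()
run_fake_stream(logger, ["Hel", "lo", "!"])
print(logger.tokens)   # → ['Hel', 'lo', '!']
print(logger.outcome)  # → ('ok', 'Hello!')
```

The key design point is that exactly one of `on_llm_end` or `on_llm_error` fires per run, while `on_llm_new_token` may fire many times (or not at all when streaming is off).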