End a trace for an LLM or chat model run.
on_llm_end(self, response: LLMResult, *, run_id: UUID, **kwargs: Any) -> Run
Note:
This callback ends both run types. Chat model runs begin with on_chat_model_start, but there is no on_chat_model_end; their completion is routed here for callback API compatibility.
Parameters:
response (LLMResult): The response from the model.
run_id (UUID): The ID of the run to end.
**kwargs (Any, default {}): Additional keyword arguments.
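A minimal sketch of how a tracer might implement this callback. The SketchTracer class, its _runs registry, and the stripped-down LLMResult and Run stand-ins below are all hypothetical stand-ins for illustration; the real classes and the tracer's internal bookkeeping live in the tracing library.

```python
from dataclasses import dataclass
from typing import Any
from uuid import UUID, uuid4

# Hypothetical stand-ins for the library's LLMResult and Run types.
@dataclass
class LLMResult:
    generations: list

@dataclass
class Run:
    run_id: UUID
    outputs: Any = None
    status: str = "running"

class SketchTracer:
    """Illustrative tracer: opens a Run on start, closes it on end."""

    def __init__(self) -> None:
        self._runs: dict[UUID, Run] = {}

    def on_llm_start(self, run_id: UUID) -> Run:
        # Open a run; on_chat_model_start would register a run the same way.
        run = Run(run_id=run_id)
        self._runs[run_id] = run
        return run

    def on_llm_end(self, response: LLMResult, *, run_id: UUID, **kwargs: Any) -> Run:
        # Look up the run opened by the matching start callback and
        # close it with the model's response. Chat model runs also end
        # here, since there is no on_chat_model_end.
        run = self._runs[run_id]
        run.outputs = response.generations
        run.status = "success"
        return run

tracer = SketchTracer()
rid = uuid4()
tracer.on_llm_start(rid)
run = tracer.on_llm_end(LLMResult(generations=[["hello"]]), run_id=rid)
```

Note that the same run_id ties the end callback back to the run opened at start time, which is why the parameter is keyword-only in the signature above.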