Run when the LLM generates a new token.
```python
on_llm_new_token(
    self,
    token: str,
    *,
    chunk: GenerationChunk | ChatGenerationChunk | None = None,
    **kwargs: Any,
) -> None
```

| Name | Type | Description |
|---|---|---|
| `token`* | `str` | The new token. |
| `chunk` | `GenerationChunk \| ChatGenerationChunk \| None` | The chunk. Default: `None`. |
| `**kwargs` | `Any` | Additional keyword arguments. |
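As an illustration of how this hook is typically used, the sketch below defines a minimal stand-in handler (the `TokenCollector` class is hypothetical and does not subclass the real callback base class) that collects each streamed token as it arrives:

```python
from typing import Any, Optional


class TokenCollector:
    """Minimal stand-in for a streaming callback handler.

    Hypothetical sketch: in the real library the handler would subclass
    the framework's callback base class, and `chunk` would be a
    GenerationChunk or ChatGenerationChunk.
    """

    def __init__(self) -> None:
        self.tokens: list[str] = []

    def on_llm_new_token(
        self,
        token: str,
        *,
        chunk: Optional[Any] = None,
        **kwargs: Any,
    ) -> None:
        # Called once per streamed token; accumulate them in order.
        self.tokens.append(token)


# Simulate a streaming model emitting tokens one at a time.
handler = TokenCollector()
for tok in ["Hello", ",", " world"]:
    handler.on_llm_new_token(tok)

print("".join(handler.tokens))  # → Hello, world
```

Because the method receives one token per call, a handler like this can forward tokens to a UI or log as they stream in, rather than waiting for the full generation.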