Called when an LLM/ChatModel in streaming mode produces a new token.
```typescript
handleLLMNewToken(
  token: string,
  idx: NewTokenIndices,
  runId: string,
  parentRunId?: string,
  tags?: string[],
  fields?: HandleLLMNewTokenCallbackFields
): any
```

| Name | Type | Description |
|---|---|---|
| token* | string | The new token emitted by the model. |
| idx* | NewTokenIndices | `idx.prompt` is the index of the prompt that produced the token (if there are multiple prompts); `idx.completion` is the index of the completion that produced the token (if multiple completions per prompt are requested). |
| runId* | string | The unique ID of the current run. |
| parentRunId | string | The ID of the parent run, if any. |
| tags | string[] | Tags associated with the run. |
| fields | HandleLLMNewTokenCallbackFields | Additional fields passed along with the token event. |
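As a minimal sketch of how a handler might use `idx`, the snippet below defines a standalone collector (not the actual LangChain.js `BaseCallbackHandler` class; the `NewTokenIndices` shape and the simulated streaming loop are assumptions for illustration) that reassembles streamed tokens per `(prompt, completion)` pair, so multiple completions of the same prompt accumulate independently:

```typescript
// Assumed shape of NewTokenIndices, per the table above.
interface NewTokenIndices {
  prompt: number;     // index of the prompt that produced the token
  completion: number; // index of the completion within that prompt
}

// A hypothetical collector that accumulates tokens keyed by
// "prompt:completion", mimicking how handleLLMNewToken is invoked
// once per streamed token.
function makeCollector() {
  const text = new Map<string, string>();
  return {
    handleLLMNewToken(token: string, idx: NewTokenIndices, runId: string): void {
      const key = `${idx.prompt}:${idx.completion}`;
      text.set(key, (text.get(key) ?? "") + token);
    },
    text,
  };
}

// Simulated streaming run: one prompt, two completions.
const collector = makeCollector();
for (const t of ["Hel", "lo"]) {
  collector.handleLLMNewToken(t, { prompt: 0, completion: 0 }, "run-1");
}
for (const t of ["Hi", "!"]) {
  collector.handleLLMNewToken(t, { prompt: 0, completion: 1 }, "run-1");
}
console.log(collector.text.get("0:0")); // "Hello"
console.log(collector.text.get("0:1")); // "Hi!"
```

Keying by both indices matters when multiple completions per prompt are requested: the callback fires for every token of every completion, and without `idx` the streams would interleave into one garbled string.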