LangChain Reference
langchain-classic › callbacks › streaming_aiter_final_only › AsyncFinalIteratorCallbackHandler
Class · Since v1.0

    AsyncFinalIteratorCallbackHandler

Callback handler that returns an async iterator. Only the final output of the agent will be iterated.

    AsyncFinalIteratorCallbackHandler(
        self,
        *,
        answer_prefix_tokens: list[str] | None = None,
        strip_tokens: bool = True,
        stream_prefix: bool = False,
    )

Bases: AsyncIteratorCallbackHandler
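
A minimal usage sketch, assuming the module path shown in the breadcrumb above (older releases exposed the same class from langchain.callbacks.streaming_aiter_final_only). Instead of a real agent run with callbacks=[handler], the fake_llm coroutine below simulates the token stream by invoking the handler's callbacks directly; the token list is illustrative only.

    import asyncio

    # Assumed import path; adjust to your installed package layout.
    from langchain_classic.callbacks.streaming_aiter_final_only import (
        AsyncFinalIteratorCallbackHandler,
    )

    async def main() -> None:
        # Prefix defaults to ["Final", "Answer", ":"].
        handler = AsyncFinalIteratorCallbackHandler()

        async def fake_llm() -> None:
            # Scratchpad tokens first, then the answer prefix, then the answer.
            for tok in ["I", " know", " this.", "Final", " Answer", ":", " 42"]:
                await handler.on_llm_new_token(tok)
                await asyncio.sleep(0)  # let the consumer keep up, as a real stream would
            await handler.on_llm_end(None)  # the response object is not inspected here

        producer = asyncio.create_task(fake_llm())

        # aiter() yields only tokens seen after the answer prefix.
        async for token in handler.aiter():
            print(repr(token))  # prints ' 42'; the scratchpad never reaches the queue

        await producer

    asyncio.run(main())

Internally, tokens are buffered on queue (asyncio.Queue[str]) and iteration stops once done (asyncio.Event) is set at the end of the run.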


Inherited from AsyncIteratorCallbackHandler

Attributes

queue: asyncio.Queue[str]
done: asyncio.Event
always_verbose: bool
    Always verbose.

    Methods

on_llm_error
aiter
    Asynchronous iterator that yields tokens.

Inherited from AsyncCallbackHandler (langchain_core)

Methods

on_chat_model_start, on_llm_error, on_chain_start, on_chain_end, on_chain_error, on_tool_start, on_tool_end, on_tool_error, on_text, on_retry, on_agent_action, on_agent_finish, on_retriever_start, on_retriever_end, on_retriever_error, on_custom_event

Inherited from BaseCallbackHandler (langchain_core)

Attributes

raise_error, run_inline, ignore_llm, ignore_retry, ignore_chain, ignore_agent, ignore_retriever, ignore_chat_model, ignore_custom_event

Inherited from LLMManagerMixin (langchain_core)

Methods

on_llm_error

Inherited from ChainManagerMixin (langchain_core)

Methods

on_chain_end, on_chain_error, on_agent_action, on_agent_finish

Inherited from ToolManagerMixin (langchain_core)

Methods

on_tool_end, on_tool_error

Inherited from RetrieverManagerMixin (langchain_core)

Methods

on_retriever_error, on_retriever_end

Inherited from CallbackManagerMixin (langchain_core)

Methods

on_chat_model_start, on_retriever_start, on_chain_start, on_tool_start

Inherited from RunManagerMixin (langchain_core)

Methods

on_text, on_retry, on_custom_event

Parameters

answer_prefix_tokens: list[str] | None, default None
    Token sequence that prefixes the answer. Defaults to ["Final", "Answer", ":"].

strip_tokens: bool, default True
    Whether to ignore white space and newlines when comparing answer_prefix_tokens to the most recent tokens (to determine whether the answer has been reached).

stream_prefix: bool, default False
    Whether the answer prefix itself should also be streamed.
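
For instance, if a custom agent announces its answer with a different sentinel, the prefix can be overridden. A hedged sketch; the hypothetical token split below must match how the model actually emits the sentinel as tokens:

    # "FINAL ANSWER:" as a hypothetical three-token sentinel.
    handler = AsyncFinalIteratorCallbackHandler(
        answer_prefix_tokens=["FINAL", "ANSWER", ":"],
        strip_tokens=True,     # compare whitespace-stripped tokens, so " FINAL\n" matches
        stream_prefix=False,   # set True to also emit the matched prefix tokens
    )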
Constructor

__init__
    answer_prefix_tokens: list[str] | None
    strip_tokens: bool
    stream_prefix: bool
Attributes

answer_prefix_tokens: list[str]
    Defaults to DEFAULT_ANSWER_PREFIX_TOKENS (["Final", "Answer", ":"]).
answer_prefix_tokens_stripped: list
last_tokens: list
last_tokens_stripped: list
strip_tokens: bool
stream_prefix: bool
answer_reached: bool
Methods

append_to_last_tokens
    Append token to the last tokens.

check_if_answer_reached
    Check if the answer has been reached (see the sketch after this list).

on_llm_start
on_llm_end
on_llm_new_token
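
Together, append_to_last_tokens and check_if_answer_reached amount to a sliding window over the most recent tokens, compared against the prefix on every new token. A minimal sketch of that idea, written independently of the class (assumed logic, not the library's exact code):

    # Sliding-window prefix detection. `strip` mirrors strip_tokens:
    # tokens are compared whitespace-stripped, so "Final" matches " Final\n".
    def make_prefix_checker(prefix: list[str], strip: bool = True):
        target = [t.strip() for t in prefix] if strip else list(prefix)
        window = [""] * len(target)

        def saw_prefix(token: str) -> bool:
            window.append(token.strip() if strip else token)
            window.pop(0)  # keep only the len(prefix) most recent tokens
            return window == target

        return saw_prefix

    saw_prefix = make_prefix_checker(["Final", "Answer", ":"])
    for tok in ["Thought:", " done.", "Final", " Answer", ":", " 42"]:
        if saw_prefix(tok):
            print("answer starts after this token")  # fires on the ':' token

on_llm_new_token runs this kind of check on every token and only starts forwarding tokens to the queue once the prefix has been seen, optionally re-emitting the prefix itself when stream_prefix=True.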
