Module: langchain_classic.callbacks.streaming_stdout_final_only (Python, langchain-classic)

Class · Since v1.0

    FinalStreamingStdOutCallbackHandler

Callback handler for streaming in agents. Only works with agents using LLMs that support streaming. Only the final output of the agent will be streamed.

FinalStreamingStdOutCallbackHandler(
    self,
    *,
    answer_prefix_tokens: list[str] | None = None,
    strip_tokens: bool = True,
    stream_prefix: bool = False,
)

    Bases

    StreamingStdOutCallbackHandler
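
A minimal usage sketch follows. The handler import matches this page's module path; the agent and model pieces assume the legacy initialize_agent / AgentType.ZERO_SHOT_REACT_DESCRIPTION API and an OpenAI completion model with streaming enabled, so adjust those imports to your installed packages.

    from langchain_classic.agents import AgentType, initialize_agent, load_tools
    from langchain_classic.callbacks.streaming_stdout_final_only import (
        FinalStreamingStdOutCallbackHandler,
    )
    from langchain_openai import OpenAI

    # Attach the handler to a streaming LLM: intermediate agent steps are
    # suppressed and only the final answer is written to stdout.
    llm = OpenAI(
        streaming=True,
        callbacks=[FinalStreamingStdOutCallbackHandler()],
        temperature=0,
    )
    tools = load_tools(["llm-math"], llm=llm)
    agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION)
    agent.run("What is 7 to the power of 0.43?")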


Inherited from StreamingStdOutCallbackHandler (langchain_core)

Methods

on_chat_model_start, on_llm_end, on_llm_error, on_chain_start, on_chain_end, on_chain_error, on_tool_start, on_agent_action, on_tool_end, on_tool_error, on_text, on_agent_finish

Inherited from BaseCallbackHandler (langchain_core)

Attributes

raise_error, run_inline, ignore_llm, ignore_retry, ignore_chain, ignore_agent, ignore_retriever, ignore_chat_model, ignore_custom_event

Inherited from LLMManagerMixin (langchain_core)

Methods

on_llm_end, on_llm_error

Inherited from ChainManagerMixin (langchain_core)

Methods

on_chain_end, on_chain_error, on_agent_action, on_agent_finish

Inherited from ToolManagerMixin (langchain_core)

Methods

on_tool_end, on_tool_error

Inherited from RetrieverManagerMixin (langchain_core)

Methods

on_retriever_error, on_retriever_end

Inherited from CallbackManagerMixin (langchain_core)

Methods

on_chat_model_start, on_retriever_start, on_chain_start, on_tool_start

Inherited from RunManagerMixin (langchain_core)

Methods

on_text, on_retry, on_custom_event
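
Because the class ultimately derives from BaseCallbackHandler, any of these inherited hooks can be overridden alongside the final-answer streaming. A small sketch, assuming the standard on_llm_end hook; the newline-printing subclass here is hypothetical, not part of the library:

    import sys

    from langchain_classic.callbacks.streaming_stdout_final_only import (
        FinalStreamingStdOutCallbackHandler,
    )

    class FinalAnswerWithNewline(FinalStreamingStdOutCallbackHandler):
        # Hypothetical subclass: terminate the streamed answer with a newline.
        def on_llm_end(self, response, **kwargs):
            # Called when generation finishes; by then the inherited handler
            # has already streamed the final-answer tokens.
            if self.answer_reached:
                sys.stdout.write("\n")
                sys.stdout.flush()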

Parameters

answer_prefix_tokens: list[str] | None (default: None)
    Token sequence that prefixes the answer. Default is ["Final", "Answer", ":"].

strip_tokens: bool (default: True)
    Whether to ignore whitespace and newlines when comparing answer_prefix_tokens to the most recent tokens, to determine whether the answer has been reached.

stream_prefix: bool (default: False)
    Whether the answer prefix itself should also be streamed.
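
For example, to detect a custom final-answer marker (an illustrative sketch; the prefix tokens must match how your model actually tokenizes the marker, which is worth verifying against the model's tokenizer):

    handler = FinalStreamingStdOutCallbackHandler(
        answer_prefix_tokens=["The", "answer", ":"],  # illustrative custom marker
        strip_tokens=True,    # compare ignoring surrounding whitespace/newlines
        stream_prefix=False,  # do not echo the marker tokens themselves
    )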
Constructor

__init__(
    self,
    *,
    answer_prefix_tokens: list[str] | None = None,
    strip_tokens: bool = True,
    stream_prefix: bool = False,
)
Attributes

answer_prefix_tokens: list[str]
    Token sequence that marks the final answer. Defaults to DEFAULT_ANSWER_PREFIX_TOKENS (["Final", "Answer", ":"]).

answer_prefix_tokens_stripped: list
    The prefix tokens with whitespace stripped.

last_tokens: list
    The most recently seen tokens.

last_tokens_stripped: list
    The most recently seen tokens, with whitespace stripped.

strip_tokens: bool
    Whether comparisons ignore whitespace and newlines.

stream_prefix: bool
    Whether the answer prefix itself is streamed.

answer_reached: bool
    Whether the answer prefix has been detected.
Methods

append_to_last_tokens
    Append token to the last tokens.

check_if_answer_reached
    Check if the answer has been reached.

on_llm_start
    Run when LLM starts running.

on_llm_new_token
    Run on new LLM token. Only available when streaming is enabled.
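
Together these methods implement a sliding-window comparison over the incoming token stream. The following is a simplified, self-contained sketch of that logic using the attribute names documented above; it is an illustration, not the library source:

    import sys

    class FinalAnswerSketch:
        # Illustrative state, mirroring the attributes documented above.
        def __init__(self, answer_prefix_tokens=None, strip_tokens=True, stream_prefix=False):
            self.answer_prefix_tokens = answer_prefix_tokens or ["Final", "Answer", ":"]
            self.answer_prefix_tokens_stripped = [t.strip() for t in self.answer_prefix_tokens]
            self.last_tokens = []
            self.last_tokens_stripped = []
            self.strip_tokens = strip_tokens
            self.stream_prefix = stream_prefix
            self.answer_reached = False

        def append_to_last_tokens(self, token: str) -> None:
            # Keep a sliding window exactly as long as the answer prefix.
            self.last_tokens.append(token)
            self.last_tokens_stripped.append(token.strip())
            if len(self.last_tokens) > len(self.answer_prefix_tokens):
                self.last_tokens.pop(0)
                self.last_tokens_stripped.pop(0)

        def check_if_answer_reached(self) -> bool:
            # Compare the window to the prefix, optionally ignoring whitespace.
            if self.strip_tokens:
                return self.last_tokens_stripped == self.answer_prefix_tokens_stripped
            return self.last_tokens == self.answer_prefix_tokens

        def on_llm_new_token(self, token: str, **kwargs) -> None:
            self.append_to_last_tokens(token)
            if self.check_if_answer_reached():
                self.answer_reached = True
                if self.stream_prefix:
                    # Optionally echo the prefix tokens themselves.
                    sys.stdout.write("".join(self.last_tokens))
                    sys.stdout.flush()
                return
            if self.answer_reached:
                # Everything after the prefix is the final answer: stream it.
                sys.stdout.write(token)
                sys.stdout.flush()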
