LangChain Reference
langgraph.pregel_messages.StreamMessagesHandlerV2.on_llm_new_token
Method ● Since v1.1

on_llm_new_token
on_llm_new_token(
    self,
    token: str,
    *,
    chunk: ChatGenerationChunk | None = None,
    run_id: UUID,
    parent_run_id: UUID | None = None,
    tags: list[str] | None = None,
    **kwargs: Any,
) -> Any

    Intentional no-op: v1 chunks are not used on v2-flagged runs.

    The v2 marker already steers invoke to the event generator, so on_llm_new_token should not fire under normal routing. This override remains a deliberate no-op (no call to super()) to make that intent explicit and to guard against any caller leaking AIMessageChunks onto a v2-flagged messages stream, e.g. a node that calls model.stream() directly, which still fires the v1 callback.