LangChain Reference
langchain_core.tracers.base.BaseTracer.on_chat_model_start
Method · Since v0.1

    on_chat_model_start

    Start a trace for an LLM run.

    on_chat_model_start(
      self,
      serialized: dict[str, Any],
      messages: list[list[BaseMessage]],
      *,
      run_id: UUID,
      tags: list[str] | None = None,
      parent_run_id: UUID | None = None,
      metadata: dict[str, Any] | None = None,
      name: str | None = None,
  **kwargs: Any,
    ) -> Run

    Parameters

    serialized: dict[str, Any] (required)
        The serialized model.

    messages: list[list[BaseMessage]] (required)
        The messages to start the chat with.

    run_id: UUID (required, keyword-only)
        The run ID.

    tags: list[str] | None (default: None)
        The tags for the run.

    parent_run_id: UUID | None (default: None)
        The parent run ID.

    metadata: dict[str, Any] | None (default: None)
        The metadata for the run.

    name: str | None (default: None)
        The name of the run.

    **kwargs: Any
        Additional keyword arguments.

    Returns

    Run
        The run that was started.
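    A minimal sketch of calling this method directly, assuming langchain-core is installed. `BaseTracer` is abstract, so the sketch implements the `_persist_run` hook (invoked when a root run ends). Passing `_schema_format="original+chat"` to the constructor is an assumption needed here because the default `"original"` format does not support direct chat model tracing; the serialized payload, tags, and run name are illustrative values, not API requirements.

    ```python
    from uuid import uuid4

    from langchain_core.messages import HumanMessage
    from langchain_core.tracers.base import BaseTracer
    from langchain_core.tracers.schemas import Run


    class CollectingTracer(BaseTracer):
        """Minimal concrete tracer that collects finished root runs."""

        def __init__(self) -> None:
            # Assumption: "original+chat" is required for chat model tracing;
            # the default "original" schema format raises NotImplementedError.
            super().__init__(_schema_format="original+chat")
            self.persisted: list[Run] = []

        def _persist_run(self, run: Run) -> None:
            # Called once per root run when the trace ends.
            self.persisted.append(run)


    tracer = CollectingTracer()

    # Start a trace for a chat model call. All literal values below
    # (serialized payload, tags, name) are illustrative.
    run = tracer.on_chat_model_start(
        serialized={"name": "my-chat-model"},
        messages=[[HumanMessage(content="Hello!")]],
        run_id=uuid4(),
        tags=["example"],
        name="demo-run",
    )

    print(run.name)  # "demo-run"
    ```

    In normal use you would not call this method yourself: registering the tracer in a model's `callbacks` lets the callback manager invoke it when a chat model invocation begins. Note that `_persist_run` has not fired yet at this point, since only the start of the run has been traced.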

    View source on GitHub