LangChain Reference
Python · langchain_core.exceptions
Class · Since v0.1

    OutputParserException

    Exception that output parsers should raise to signify a parsing error.

    This exists to differentiate parsing errors from other code or execution errors that also may arise inside the output parser.

An OutputParserException can be caught and handled in ways that attempt to fix the parsing error, while other errors are propagated as usual.

    OutputParserException(
      self,
      error: Any,
      observation: str | None = None,
      llm_output: str | None = None,
      send_to_llm: bool = False
    )

    Bases

ValueError, LangChainException

    Parameters

error (Any, required)
    The error being re-raised, or an error message.

observation (str | None, default: None)
    String explanation of the error, which can be passed to a model to try to remediate the issue.

llm_output (str | None, default: None)
    The erroring string output from the model.

send_to_llm (bool, default: False)
    Whether to send the observation and llm_output back to an agent after an OutputParserException has been raised. This gives the underlying model driving the agent the context that the previous output was improperly structured, in the hope that it will update the output to the correct format.

    Constructors

    constructor
    __init__
    error: Any
    observation: str | None
    llm_output: str | None
    send_to_llm: bool

    Attributes

attribute
observation: str | None
attribute
llm_output: str | None
attribute
send_to_llm: bool