    langchain_core.output_parsers.pydantic
    Module · Since v0.1

    pydantic

    Output parsers using Pydantic.

    Attributes

    attribute
    PydanticBaseModel: BaseModel
    attribute
    TBaseModel

    Classes

    class
    OutputParserException

    Exception that output parsers should raise to signify a parsing error.

    This exists to differentiate parsing errors from other code or execution errors that may also arise inside the output parser.

    An OutputParserException can be caught and handled in ways that attempt to fix the parsing error, while other errors will be raised as usual.
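    A minimal sketch of the intended usage, with a hypothetical custom parser (not part of langchain_core) that raises OutputParserException on unparseable output so callers can catch it separately from other errors:

```python
# Hypothetical custom parser, used only to illustrate raising and catching
# OutputParserException; it is not part of langchain_core itself.
from langchain_core.exceptions import OutputParserException


def parse_yes_no(text: str) -> bool:
    cleaned = text.strip().lower()
    if cleaned not in ("yes", "no"):
        raise OutputParserException(f"Expected 'yes' or 'no', got: {text!r}")
    return cleaned == "yes"


try:
    parse_yes_no("maybe")
except OutputParserException as exc:
    # Parsing failures can be handled separately from other runtime errors,
    # e.g. by re-prompting the model or falling back to a default value.
    print(f"Could not parse model output: {exc}")
```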

    class
    JsonOutputParser

    Parse the output of an LLM call to a JSON object.

    Probably the most reliable output parser for getting structured data that does not use function calling.

    When used in streaming mode, it will yield partial JSON objects containing all the keys that have been returned so far.

    When streaming with diff set to True, it instead yields JSONPatch operations describing the difference between the previous and the current partial object.
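    A minimal, self-contained sketch of the non-streaming path (in a real chain the parser is usually composed as prompt | model | parser and streamed to get partial objects):

```python
# Minimal sketch: parse a raw model reply into a Python dict.
from langchain_core.output_parsers import JsonOutputParser

parser = JsonOutputParser()

# Format instructions are typically injected into the prompt sent to the model.
print(parser.get_format_instructions())

# parse() also strips a Markdown JSON code fence if the model wraps its answer.
reply = '{"setup": "Why did the chicken cross the road?", "punchline": "To get to the other side."}'
print(parser.parse(reply))
# {'setup': 'Why did the chicken cross the road?', 'punchline': 'To get to the other side.'}
```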

    class
    Generation

    A single text generation output.

    Generation represents the response from an "old-fashioned" LLM (string-in, string-out) that generates regular text (not chat messages).

    This model is used internally by chat models and will eventually be mapped to a more general LLMResult object, and then projected into an AIMessage object.

    LangChain users working with chat models will usually access information via AIMessage (returned from runnable interfaces) or LLMResult (available via callbacks). Please refer to AIMessage and LLMResult for more information.
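    A minimal sketch of how Generation relates to this module: the lower-level parse_result() API on output parsers consumes a list of Generation objects (illustrative only; in normal use these objects are produced by the model, not constructed by hand):

```python
# Illustrative only: construct a Generation by hand and feed it to the
# lower-level parse_result() API that output parsers expose.
from langchain_core.output_parsers import JsonOutputParser
from langchain_core.outputs import Generation

generation = Generation(text='{"answer": 42}')
print(generation.text)  # the raw string-out response

parser = JsonOutputParser()
print(parser.parse_result([generation]))  # {'answer': 42}
```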

    class
    PydanticOutputParser

    Parse an output using a Pydantic model.
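    A minimal sketch, assuming a small illustrative Joke schema: the target structure is declared as a Pydantic model, the format instructions describe it to the model, and output that does not validate against the model raises OutputParserException:

```python
# The Joke model is a hypothetical schema used only for illustration.
from pydantic import BaseModel, Field

from langchain_core.output_parsers import PydanticOutputParser


class Joke(BaseModel):
    setup: str = Field(description="The question that sets up the joke")
    punchline: str = Field(description="The answer that resolves the joke")


parser = PydanticOutputParser(pydantic_object=Joke)

# Schema-aware format instructions, typically injected into the prompt.
print(parser.get_format_instructions())

joke = parser.parse('{"setup": "What do you call a fake noodle?", "punchline": "An impasta."}')
print(joke.punchline)  # a validated Joke instance, not a plain dict
```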
