Python › langchain-classic › output_parsers › retry › RetryWithErrorOutputParser

Class · Since v1.0

RetryWithErrorOutputParser

RetryWithErrorOutputParser()

Bases

BaseOutputParser[T]

Wrap a parser and try to fix parsing errors.

Does this by passing the original prompt, the completion, AND the error that was raised to another language model and telling it that the completion did not work and raised the given error. Differs from RetryOutputParser in that this implementation provides the error that was raised back to the LLM, which in theory should give it more information on how to fix it.
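A minimal usage sketch follows. The import path, model name, and query are assumptions for illustration (the class has historically also been exposed as langchain.output_parsers.RetryWithErrorOutputParser); any chat model and any wrapped parser will do.

    from langchain_core.output_parsers import PydanticOutputParser
    from langchain_core.prompts import PromptTemplate
    from langchain_openai import ChatOpenAI
    from pydantic import BaseModel, Field

    from langchain_classic.output_parsers import RetryWithErrorOutputParser  # assumed import path


    class Action(BaseModel):
        action: str = Field(description="action to take")
        action_input: str = Field(description="input to the action")


    parser = PydanticOutputParser(pydantic_object=Action)

    prompt = PromptTemplate(
        template="Answer the user query.\n{format_instructions}\n{query}\n",
        input_variables=["query"],
        partial_variables={"format_instructions": parser.get_format_instructions()},
    )
    prompt_value = prompt.format_prompt(query="Look up the weather in Paris")

    # A completion that fails validation: the required "action_input" field is missing.
    bad_completion = '{"action": "search"}'

    retry_parser = RetryWithErrorOutputParser.from_llm(
        llm=ChatOpenAI(model="gpt-4o-mini"),  # illustrative model name
        parser=parser,
        max_retries=2,
    )

    # The original prompt, the failed completion, and the parsing error are sent back
    # to the LLM, which is asked for a corrected completion before re-parsing.
    fixed = retry_parser.parse_with_prompt(bad_completion, prompt_value)
    print(fixed)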

Attributes
    attribute
    parser: Annotated[BaseOutputParser[T], SkipValidation()]

    The parser to use to parse the output.

    attribute
    retry_chain: Annotated[RunnableSerializable[RetryWithErrorOutputParserRetryChainInput, str] | Any, SkipValidation()]

    The RunnableSerializable to use to retry the completion (Legacy: LLMChain).

    attribute
    max_retries: int

    The maximum number of times to retry the parse.

    attribute
    legacy: bool

    Whether to use the run or arun method of the retry_chain.

    attribute
    OutputType: type[T]
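The attributes above can also be set directly instead of going through from_llm. A sketch, assuming the langchain_classic import path and that the default retry prompt exposes the variables prompt, completion, and error (the template text and model name below are illustrative):

    from langchain_core.output_parsers import JsonOutputParser, StrOutputParser
    from langchain_core.prompts import PromptTemplate
    from langchain_openai import ChatOpenAI

    from langchain_classic.output_parsers import RetryWithErrorOutputParser  # assumed import path

    # Assumption: mirrors the default retry prompt, which shows the retrying model
    # the original prompt, the failed completion, and the raised error.
    retry_prompt = PromptTemplate.from_template(
        "Prompt:\n{prompt}\n"
        "Completion:\n{completion}\n\n"
        "Above, the Completion did not satisfy the constraints given in the Prompt.\n"
        "Details: {error}\n"
        "Please try again:"
    )

    retry_chain = retry_prompt | ChatOpenAI(model="gpt-4o-mini") | StrOutputParser()

    retry_parser = RetryWithErrorOutputParser(
        parser=JsonOutputParser(),  # the wrapped parser whose errors trigger a retry
        retry_chain=retry_chain,    # RunnableSerializable mapping {prompt, completion, error} -> str
        max_retries=3,              # give the model three attempts to fix the completion
        legacy=False,               # retry_chain is a Runnable, not a legacy LLMChain with run/arun
    )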
Methods

method
from_llm

    Create a RetryWithErrorOutputParser from an LLM.

method
parse_with_prompt

Parse the output of an LLM call using a wrapped parser.

method
aparse_with_prompt

Asynchronously parse the output of an LLM call using a wrapped parser.
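A sketch of the async variant, reusing retry_parser, bad_completion, and prompt_value from the from_llm sketch above; aparse_with_prompt awaits the retry chain instead of calling it synchronously.

    import asyncio


    async def fix_completion() -> None:
        # Same inputs as parse_with_prompt: the failed completion plus the PromptValue
        # it was generated from, so the retry LLM sees the full context and the error.
        fixed = await retry_parser.aparse_with_prompt(bad_completion, prompt_value)
        print(fixed)


    asyncio.run(fix_completion())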

    method
    parse
    method
    get_format_instructions
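parse and get_format_instructions come from the wrapped-parser interface; only parse_with_prompt and aparse_with_prompt receive the original prompt, so they are the entry points that can actually retry. A small sketch, assuming (as is typical for wrapper parsers) that get_format_instructions delegates to the wrapped parser; retry_parser is the instance from the earlier sketches:

    from langchain_core.prompts import PromptTemplate

    # Assumption: the retry wrapper forwards get_format_instructions() to the wrapped
    # parser, so prompts can be built against the retry parser directly.
    prompt = PromptTemplate(
        template="Answer the user query.\n{format_instructions}\n{query}\n",
        input_variables=["query"],
        partial_variables={"format_instructions": retry_parser.get_format_instructions()},
    )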

Inherited from BaseOutputParser (langchain_core)

Attributes

InputType

Methods

invoke, ainvoke, parse_result, aparse_result, aparse, dict

Inherited from BaseLLMOutputParser (langchain_core)

Methods

parse_result, aparse_result

Inherited from RunnableSerializable (langchain_core)

Attributes

name, model_config

Methods

to_json, configurable_fields, configurable_alternatives

Inherited from Serializable (langchain_core)

Attributes

lc_secrets, lc_attributes, model_config

Methods

is_lc_serializable, get_lc_namespace, lc_id, to_json, to_json_not_implemented

Inherited from Runnable (langchain_core)

Attributes

name, InputType, input_schema, output_schema, config_specs

Methods

get_name, get_input_schema, get_input_jsonschema, get_output_schema, get_output_jsonschema, config_schema, get_config_jsonschema, get_graph, get_prompts, pipe, pick, assign, invoke, ainvoke, batch, batch_as_completed, abatch, abatch_as_completed, stream, astream, astream_log, astream_events, transform, atransform, bind, with_config, with_listeners, with_alisteners, with_types, with_retry, map, with_fallbacks, as_tool