    langchain_core.output_parsers.transform.BaseTransformOutputParser
    Class · Since v0.1

    BaseTransformOutputParser

    Base class for an output parser that can handle streaming input.

    BaseTransformOutputParser(
      *args: Any,
      **kwargs: Any
    )

    Bases

    BaseOutputParser[T]
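
    Concrete parsers subclass this class and implement parse; the default transform and atransform implementations then apply parse to the streamed output. A minimal sketch, assuming langchain-core is installed; UpperCaseOutputParser is a hypothetical example, not part of the library:

    from langchain_core.output_parsers import BaseTransformOutputParser

    class UpperCaseOutputParser(BaseTransformOutputParser[str]):
        """Hypothetical parser that upper-cases the model output."""

        def parse(self, text: str) -> str:
            # Called on the whole output by invoke, and once per chunk when streaming
            return text.upper()

    parser = UpperCaseOutputParser()
    parser.invoke("hello")  # 'HELLO'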

    Methods

    transform

    Transform the input into the output format.

    atransform

    Async transform the input into the output format.
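
    Both methods consume an iterator (or async iterator) of chunks, typically strings or message chunks from a chat model, and yield parsed chunks lazily. A rough usage sketch with the built-in StrOutputParser, itself a BaseTransformOutputParser subclass; the chunk values are made up for illustration:

    import asyncio

    from langchain_core.output_parsers import StrOutputParser

    parser = StrOutputParser()

    # transform: sync iterator in, parsed chunks out
    for piece in parser.transform(iter(["Hel", "lo"])):
        print(piece)  # 'Hel', then 'lo'

    # atransform: the same idea over an async iterator
    async def chunks():
        for c in ["Hel", "lo"]:
            yield c

    async def main() -> None:
        async for piece in parser.atransform(chunks()):
            print(piece)

    asyncio.run(main())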

    Inherited from BaseOutputParser

    Attributes

    InputType: Any
    OutputType: Any

    Methods

    invoke

    Transform a single input into an output.

    ainvoke

    Asynchronously transform a single input into an output.

    parse_result

    Parse a list of candidate model Generation objects into a specific format.

    parse

    Parse a single string model output into some structure.

    aparse_result

    Async parse a list of candidate model Generation objects into a specific format.

    aparse

    Async parse a single string model output into some structure.

    parse_with_prompt

    Parse the output of an LLM call with the input prompt for context.

    get_format_instructions

    Return instructions on how the LLM output should be formatted.

    dict

    Return dictionary representation of output parser.
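
    A sketch of the parsing entry points above, using the built-in CommaSeparatedListOutputParser (another BaseTransformOutputParser subclass) so the calls have something concrete to do:

    from langchain_core.messages import AIMessage
    from langchain_core.output_parsers import CommaSeparatedListOutputParser

    parser = CommaSeparatedListOutputParser()

    # invoke accepts either raw text or a message from a chat model
    parser.invoke("red, green, blue")         # ['red', 'green', 'blue']
    parser.invoke(AIMessage(content="a, b"))  # ['a', 'b']

    # parse works directly on a string
    parser.parse("a, b, c")                   # ['a', 'b', 'c']

    # get_format_instructions returns text to splice into a prompt
    print(parser.get_format_instructions())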

    Inherited from BaseLLMOutputParser

    Methods

    parse_result

    Parse a list of candidate model Generation objects into a specific format.

    aparse_result

    Async parse a list of candidate model Generation objects into a specific format.
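
    parse_result and aparse_result operate on the Generation objects returned by a model call rather than on plain strings. A small sketch, assuming langchain-core:

    import asyncio

    from langchain_core.output_parsers import StrOutputParser
    from langchain_core.outputs import Generation

    parser = StrOutputParser()

    # Sync: parse the first (and usually only) candidate generation
    parser.parse_result([Generation(text="hello world")])  # 'hello world'

    # Async counterpart
    async def main() -> None:
        print(await parser.aparse_result([Generation(text="hello world")]))

    asyncio.run(main())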

    Inherited from RunnableSerializable

    Attributes

    name: str

    The name of the Runnable, used for debugging and tracing.

    model_config

    Methods

    to_json

    Serialize the Runnable to JSON.

    configurable_fields

    Configure particular Runnable fields at runtime.

    configurable_alternatives

    Configure alternatives for Runnable objects that can be set at runtime.
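
    A sketch of configurable_alternatives, letting callers swap which parser runs via config at call time; the "parser" field id and the "str"/"list" keys are made-up names for the example:

    from langchain_core.output_parsers import (
        CommaSeparatedListOutputParser,
        StrOutputParser,
    )
    from langchain_core.runnables import ConfigurableField

    parser = StrOutputParser().configurable_alternatives(
        ConfigurableField(id="parser"),
        default_key="str",
        list=CommaSeparatedListOutputParser(),
    )

    parser.invoke("a, b, c")  # 'a, b, c' (the default StrOutputParser)
    parser.with_config(configurable={"parser": "list"}).invoke("a, b, c")  # ['a', 'b', 'c']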

    Inherited from Serializable

    Attributes

    lc_secrets: dict[str, str]

    A map of constructor argument names to secret ids.

    lc_attributes: dict

    List of attribute names that should be included in the serialized kwargs.

    model_config

    Methods

    is_lc_serializable

    Return whether this class is serializable.

    get_lc_namespace

    Get the namespace of the LangChain object.

    lc_id

    Return a unique identifier for this class for serialization purposes.

    to_json

    Serialize the Runnable to JSON.

    to_json_not_implemented

    Serialize a "not implemented" object.
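
    The serialization hooks above feed langchain-core's dumps/loads helpers. A rough round-trip sketch with StrOutputParser, which is marked as serializable:

    from langchain_core.load import dumps, loads
    from langchain_core.output_parsers import StrOutputParser

    parser = StrOutputParser()

    StrOutputParser.is_lc_serializable()  # True for this subclass
    StrOutputParser.lc_id()               # namespace plus class name used in the payload

    serialized = dumps(parser)            # JSON string built from to_json()
    restored = loads(serialized)          # reconstructs an equivalent parser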

    Inherited from Runnable

    Attributes

    name: str

    The name of the Runnable, used for debugging and tracing.

    InputType: Any
    OutputType: Any

    input_schema: type[BaseModel]

    The type of input this Runnable accepts specified as a Pydantic model.

    output_schema: type[BaseModel]

    The type of output this Runnable produces specified as a Pydantic model.

    config_specs: list[ConfigurableFieldSpec]

    Methods

    get_name

    Get the name of the Runnable.

    get_input_schema

    Get a Pydantic model that can be used to validate input to the Runnable.

    get_input_jsonschema

    Get a JSON schema that represents the input to the Runnable.

    get_output_schema

    Get a Pydantic model that can be used to validate output of the Runnable.

    get_output_jsonschema

    Get a JSON schema that represents the output of the Runnable.

    config_schema

    The type of config this Runnable accepts specified as a Pydantic model.

    get_config_jsonschema

    Get a JSON schema that represents the config of the Runnable.
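
    A short sketch of the name and schema introspection helpers listed above, using a concrete subclass so the schemas are non-trivial:

    from langchain_core.output_parsers import CommaSeparatedListOutputParser

    parser = CommaSeparatedListOutputParser()

    parser.get_name()               # 'CommaSeparatedListOutputParser'
    parser.get_input_jsonschema()   # JSON schema for the accepted input (text or message)
    parser.get_output_jsonschema()  # JSON schema for the parsed output (list of strings)
    parser.config_schema()          # Pydantic model describing the accepted config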

    get_graph

    Return a graph representation of this Runnable.

    get_prompts

    Return a list of prompts used by this Runnable.

    pipe

    Compose this Runnable with other Runnable-like objects into a RunnableSequence.

    pick

    Pick keys from the output dict of this Runnable.

    assign

    Assign new fields to the dict output of this Runnable.
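
    A sketch of composition with pipe, pick, and assign; JsonOutputParser is used because pick and assign expect a dict-producing Runnable:

    from langchain_core.output_parsers import CommaSeparatedListOutputParser, JsonOutputParser
    from langchain_core.runnables import RunnableLambda

    # pipe composes Runnables left to right, like the | operator
    chain = CommaSeparatedListOutputParser().pipe(RunnableLambda(len))
    chain.invoke("a, b, c")  # 3: parse to a list, then count the items

    # pick keeps a key from a dict-producing Runnable's output
    JsonOutputParser().pick("name").invoke('{"name": "Ada", "age": 36}')  # 'Ada'

    # assign adds computed fields to the dict output
    JsonOutputParser().assign(n_keys=lambda d: len(d)).invoke('{"a": 1, "b": 2}')
    # {'a': 1, 'b': 2, 'n_keys': 2}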

    invoke

    Transform a single input into an output.

    ainvoke

    Asynchronously transform a single input into an output.

    batch

    Run invoke in parallel on a list of inputs.

    batch_as_completed

    Run invoke in parallel on a list of inputs, yielding results as they complete.

    abatch

    Run ainvoke in parallel on a list of inputs.

    abatch_as_completed

    Run ainvoke in parallel on a list of inputs, yielding results as they complete.
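
    A sketch of the batch helpers; each string in the list is treated as one independent input:

    from langchain_core.output_parsers import CommaSeparatedListOutputParser

    parser = CommaSeparatedListOutputParser()

    # batch runs invoke over each input (a thread pool in the default implementation)
    parser.batch(["a, b", "c, d, e"])  # [['a', 'b'], ['c', 'd', 'e']]

    # batch_as_completed yields (index, output) pairs as each input finishes
    for i, out in parser.batch_as_completed(["a, b", "c"]):
        print(i, out)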

    stream

    Stream output from this Runnable.

    astream

    Async stream output from this Runnable.

    astream_log

    Stream all output from a Runnable, as reported to the callback system.

    astream_events

    Generate a stream of events.
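
    A sketch of the streaming entry points. On a standalone parser the whole input arrives as a single chunk, but the same calls stream incrementally when the parser sits at the end of a chain; the "v2" event schema is assumed here and event names can differ across langchain-core versions:

    import asyncio

    from langchain_core.output_parsers import StrOutputParser

    parser = StrOutputParser()

    # stream yields parsed chunks as they become available
    for chunk in parser.stream("hello"):
        print(chunk)  # 'hello'

    async def main() -> None:
        # astream_events emits structured start/stream/end events for tracing
        async for event in parser.astream_events("hello", version="v2"):
            print(event["event"])

    asyncio.run(main())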

    bind

    Bind arguments to a Runnable, returning a new Runnable.

    with_config

    Bind config to a Runnable, returning a new Runnable.

    with_listeners

    Bind lifecycle listeners to a Runnable, returning a new Runnable.

    with_alisteners

    Bind async lifecycle listeners to a Runnable.

    with_types

    Bind input and output types to a Runnable, returning a new Runnable.

    with_retry

    Create a new Runnable that retries the original Runnable on exceptions.
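
    A sketch of binding configuration and retries to a parser; the run name and tag values are arbitrary:

    from langchain_core.output_parsers import CommaSeparatedListOutputParser

    parser = CommaSeparatedListOutputParser()

    # with_config attaches default config (run name, tags, ...) to every call
    tagged = parser.with_config(run_name="csv-parse", tags=["demo"])
    tagged.invoke("a, b")  # ['a', 'b'], traced under 'csv-parse'

    # with_retry wraps the Runnable so failing calls are retried with backoff
    retrying = parser.with_retry(stop_after_attempt=3)
    retrying.invoke("a, b")  # ['a', 'b']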

    map

    Return a new Runnable that maps a list of inputs to a list of outputs.

    with_fallbacks

    Add fallbacks to a Runnable, returning a new Runnable.

    as_tool

    Create a BaseTool from a Runnable.
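
    A sketch of with_fallbacks and as_tool; falling back from JSON parsing to a plain string parser is just one plausible use, and the tool name and description are made up:

    from langchain_core.output_parsers import JsonOutputParser, StrOutputParser

    # If JSON parsing raises, fall back to returning the raw string unchanged
    lenient = JsonOutputParser().with_fallbacks([StrOutputParser()])
    lenient.invoke('{"ok": true}')    # {'ok': True}
    lenient.invoke("not valid json")  # 'not valid json' (fallback path)

    # as_tool wraps the Runnable as a BaseTool that an agent can call
    tool = JsonOutputParser().as_tool(
        name="parse_json",
        description="Parse a JSON string into a Python object.",
    )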
