Pipes an LLM through an output parser, optionally wrapping the result so that the raw LLM response is returned alongside the parsed output. When `includeRaw` is `true`, the pipeline returns `{ raw: BaseMessage, parsed: RunOutput }`; if parsing fails, `parsed` falls back to `null` instead of throwing.
```typescript
assembleStructuredOutputPipeline<
  RunOutput extends Record<string, any> = Record<string, any>
>(
  llm: Runnable<BaseLanguageModelInput>,
  outputParser: Runnable<any, RunOutput>,
  includeRaw: boolean,
  runName: string
): Runnable<BaseLanguageModelInput, RunOutput, RunnableConfig<Record<string, any>>>
 | Runnable<BaseLanguageModelInput, { raw: BaseMessage; parsed: RunOutput | null }, RunnableConfig<Record<string, any>>>
```

| Name | Type | Description |
|---|---|---|
| `llm`* | `Runnable<BaseLanguageModelInput>` | The language model to invoke. |
| `outputParser`* | `Runnable<any, RunOutput>` | Parser applied to the LLM's response. |
| `includeRaw` | `boolean` | When `true`, return the raw message alongside the parsed output. |
| `runName` | `string` | Name assigned to the resulting runnable. |
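The `includeRaw` behavior can be illustrated with a minimal sketch that uses plain functions in place of real `Runnable` objects. All type and function names below are simplified stand-ins, not the library's actual implementation:

```typescript
// Hypothetical stand-ins for the real types; this only models the
// includeRaw wrapping behavior described above.
type BaseMessage = { content: string };

function assembleStructuredOutputPipeline<RunOutput>(
  llm: (input: string) => BaseMessage,
  outputParser: (msg: BaseMessage) => RunOutput,
  includeRaw: boolean
) {
  return (input: string) => {
    const raw = llm(input);
    if (!includeRaw) {
      // Without includeRaw, parse errors propagate to the caller.
      return outputParser(raw);
    }
    // With includeRaw, keep the raw message and fall back to
    // parsed: null when the parser throws.
    try {
      return { raw, parsed: outputParser(raw) };
    } catch {
      return { raw, parsed: null as RunOutput | null };
    }
  };
}

// Usage: a fake LLM that emits JSON, paired with a JSON parser.
const pipeline = assembleStructuredOutputPipeline(
  (_q) => ({ content: '{"answer": 42}' }),
  (msg) => JSON.parse(msg.content) as { answer: number },
  true
);
const result = pipeline("question");
// result.parsed is { answer: 42 }; result.raw holds the original message.
```

Note that the fallback to `null` applies only on the `includeRaw` path; without it there is no `parsed` field to fall back on, so the parser's error surfaces directly.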