LangChain Reference
Moduleā—Since v1.1

language_models/structured_output

import { ... } from "@langchain/core/language_models/structured_output";

Functions

function
assembleStructuredOutputPipeline → Runnable<BaseLanguageModelInput, RunOutput, RunnableConfig<Record<string, any>>> | Runnable<BaseLanguageModelInput, { raw: BaseMessage; parsed: RunOutput | null }, RunnableConfig<Record<string, any>>>

Pipes an LLM through an output parser, optionally wrapping the result to include the raw LLM response alongside the parsed output.

When includeRaw is true, returns { raw: BaseMessage, parsed: RunOutput }. If parsing fails, parsed falls back to null.
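The includeRaw contract can be sketched as follows. This is a hypothetical helper, not the library's implementation: it shows only the wrapping behavior described above, where the raw output is always kept and a parse failure yields parsed: null instead of throwing.

```typescript
// Hypothetical sketch of the includeRaw contract (not the library's code):
// keep the raw model output alongside the parsed result, and fall back to
// parsed: null when parsing fails.
type RawAndParsed<T> = { raw: string; parsed: T | null };

function wrapWithRaw<T>(raw: string, parse: (text: string) => T): RawAndParsed<T> {
  try {
    return { raw, parsed: parse(raw) };
  } catch {
    return { raw, parsed: null };
  }
}

const ok = wrapWithRaw('{"answer": 42}', (t) => JSON.parse(t) as { answer: number });
const bad = wrapWithRaw("not json", (t) => JSON.parse(t));
// ok.parsed holds the parsed object; bad.parsed is null and bad.raw keeps the text.
```

The point of the fallback is that callers inspecting { raw, parsed } can recover from malformed model output (for example, by retrying) instead of catching an exception mid-pipeline.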

function
createContentParser → BaseOutputParser<RunOutput>

Creates the appropriate content-based output parser for a schema. Use this for jsonMode/jsonSchema methods where the LLM returns JSON text.

  • Zod schema → StructuredOutputParser (Zod validation)
  • Standard schema → StandardSchemaOutputParser (standard schema validation)
  • Plain JSON schema → JsonOutputParser (no validation)
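The three-way dispatch above can be sketched as a classifier. The detection heuristics here are assumptions for illustration (a parse method plus _def for Zod, the "~standard" key from the Standard Schema spec), not necessarily the library's actual checks.

```typescript
// Assumed detection heuristics, for illustration only: pick a parser
// strategy based on what kind of schema was supplied.
type ParserKind = "zod" | "standard-schema" | "json-schema";

function classifySchema(schema: object): ParserKind {
  const s = schema as Record<string, unknown>;
  // Zod schemas expose a parse() method and an internal _def.
  if (typeof s["parse"] === "function" && "_def" in s) return "zod";
  // The Standard Schema spec keys its interface under "~standard".
  if ("~standard" in s) return "standard-schema";
  // Anything else is treated as a plain JSON schema: no runtime validation.
  return "json-schema";
}

classifySchema({ parse: () => ({}), _def: {} });    // a Zod-like schema
classifySchema({ "~standard": { version: 1 } });    // a standard-schema-like object
classifySchema({ type: "object", properties: {} }); // a plain JSON schema
```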
function
createFunctionCallingParser → BaseLLMOutputParser<RunOutput>

Creates the appropriate tool-calling output parser for a schema. Use this for function calling / tool use methods where the LLM returns structured tool calls.

  • Zod schema → parser with Zod validation
  • Standard schema → parser with standard schema validation
  • Plain JSON schema → parser with no validation
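In the tool-calling case, the parser reads structured arguments off the model's tool call rather than parsing JSON text. A minimal sketch, using hypothetical simplified message shapes rather than the library's real types:

```typescript
// Hypothetical, simplified shapes illustrating what a tool-calling output
// parser does: take the arguments off the model's first tool call and
// optionally validate them before returning.
interface ToolCallLike { name: string; args: Record<string, unknown> }
interface AIMessageLike { tool_calls?: ToolCallLike[] }

function parseToolCallArgs<T>(
  message: AIMessageLike,
  validate?: (args: unknown) => T,
): T | Record<string, unknown> {
  const call = message.tool_calls?.[0];
  if (!call) throw new Error("model did not produce a tool call");
  // With a Zod or standard schema, validate enforces the shape at runtime;
  // with a plain JSON schema the args pass through unvalidated.
  return validate ? validate(call.args) : call.args;
}

const msg: AIMessageLike = {
  tool_calls: [{ name: "extract", args: { city: "Paris", population: 2100000 } }],
};
const args = parseToolCallArgs(msg); // { city: "Paris", population: 2100000 }
```

Because the arguments arrive already structured, the validation step is the only difference between the three schema kinds listed above.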