Model wrapper that returns outputs formatted to match the given schema.

| Name | Type | Description |
|---|---|---|
| schema | dict \| type[BaseModel] \| None | Default: None. The output schema. Currently only a Pydantic class is supported. |
| method | Literal['function_calling', 'json_mode', 'json_schema'] | Default: 'function_calling'. The method for steering model generation, one of 'function_calling', 'json_mode', or 'json_schema'. |
| include_raw | bool | Default: False. If False, only the parsed structured output is returned, and any error raised during output parsing propagates. If True, both the raw model response (a BaseMessage) and the parsed model response are returned, and any parsing error is caught and returned as well; the final output is always a dict with keys 'raw', 'parsed', and 'parsing_error'. |
| kwargs | Any | Default: {}. Additional parameters to pass to the underlying LLM's langchain_core.language_models.chat.BaseChatModel.bind method, such as response_format or ls_structured_output_format. |
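A minimal sketch of the `include_raw` contract described above, using only the standard library. `Joke` and `parse_structured` are hypothetical stand-ins for illustration (not LangChain APIs): with `include_raw=False` only the parsed output is returned and parsing errors propagate; with `include_raw=True` the result is a dict with keys `'raw'`, `'parsed'`, and `'parsing_error'`.

```python
import json
from dataclasses import dataclass


@dataclass
class Joke:
    # Illustrative output schema; in LangChain this would be a Pydantic class.
    setup: str
    punchline: str


def parse_structured(raw_text: str, include_raw: bool = False):
    """Hypothetical helper mimicking the wrapper's include_raw behavior."""
    parsed, error = None, None
    try:
        parsed = Joke(**json.loads(raw_text))
    except (json.JSONDecodeError, TypeError) as exc:
        error = exc
        if not include_raw:
            raise  # include_raw=False: parsing errors are raised
    if include_raw:
        # include_raw=True: raw response, parsed output, and any
        # parsing error are all returned together.
        return {"raw": raw_text, "parsed": parsed, "parsing_error": error}
    return parsed  # include_raw=False: only the parsed output


raw = '{"setup": "Why did the cat sit on the laptop?", "punchline": "To watch the mouse."}'
result = parse_structured(raw, include_raw=True)
```

After this call, `result["parsing_error"]` is `None` and `result["parsed"]` is a `Joke` instance; feeding malformed JSON instead would leave `result["parsed"]` as `None` and store the exception under `'parsing_error'`.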