Ollama chat models.
Input Flow (LangChain -> Ollama)
- `_convert_messages_to_ollama_messages()`: converts LangChain messages into the `ollama.Message` format
- `_chat_params()`: builds the request parameters, including the `think` parameter

Output Flow (Ollama -> LangChain)
- Stream dictionary chunks containing:
  - `message`: Dict with `role`, `content`, `tool_calls`, `thinking`
  - `done`: Boolean indicating completion
  - `done_reason`: Reason for completion (`stop`, `length`, `load`)
- `_iterate_over_stream()` converts each chunk:
  - `message.content` -> `ChatGenerationChunk` with `AIMessageChunk` content
  - `tool_calls` -> `ToolCalls`
  - `thinking` -> captured when `reasoning=True` (stored in `additional_kwargs`)
- Chunks are combined into a `ChatResult` with the complete `AIMessage`:
  - tool calls on `AIMessage.tool_calls`
  - reasoning content on `AIMessage.additional_kwargs['reasoning_content']`

Merge authentication headers into client kwargs in-place.
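The in-place header merge described above can be sketched as follows; `merge_auth_headers` is an illustrative name, not the library's actual function:

```python
def merge_auth_headers(client_kwargs: dict, auth_headers: dict) -> None:
    """Merge auth headers into ``client_kwargs["headers"]`` in-place.

    Existing headers take precedence, so explicit user configuration is
    never overwritten. (Hypothetical sketch, not the library's code.)
    """
    headers = client_kwargs.setdefault("headers", {})
    for key, value in auth_headers.items():
        headers.setdefault(key, value)
```

Because the merge mutates `client_kwargs` directly, the caller's dict picks up the `headers` entry without any reassignment.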
Parse URL and extract userinfo credentials for headers.
Handles URLs of the form: https://user:password@host:port/path
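A minimal sketch of that userinfo extraction, built on the standard library's `urllib.parse` (the function name and return shape are assumptions, not the library's exact code):

```python
import base64
from urllib.parse import urlsplit, urlunsplit


def split_url_credentials(url: str) -> tuple[str, dict]:
    """Strip user:password from a URL and return (clean_url, auth_headers).

    Illustrative sketch: credentials become a Basic auth header.
    """
    parts = urlsplit(url)
    if parts.username is None:
        return url, {}
    token = base64.b64encode(
        f"{parts.username}:{parts.password or ''}".encode()
    ).decode()
    # Rebuild the netloc without the userinfo portion.
    host = parts.hostname or ""
    if parts.port:
        host = f"{host}:{parts.port}"
    clean = urlunsplit((parts.scheme, host, parts.path, parts.query, parts.fragment))
    return clean, {"Authorization": f"Basic {token}"}
```

`urlsplit` exposes `username`, `password`, `hostname`, and `port` separately, which avoids fragile string slicing on the `@` sign.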
Validate that a model exists in the local Ollama instance.
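The validation idea can be sketched as a pure function; here the `installed` list stands in for whatever the local instance reports (e.g. via `ollama list`), and the function name is illustrative:

```python
def validate_model(model: str, installed: list[str]) -> None:
    """Raise if ``model`` is not among the locally installed model names.

    Hypothetical sketch: a bare name like "gpt-oss" should match a tagged
    install like "gpt-oss:20b", while an explicit tag must match exactly.
    """
    bases = {name.split(":")[0] for name in installed}
    if model not in installed and model not in bases:
        raise ValueError(
            f"Model '{model}' not found locally; pull it with `ollama pull {model}`."
        )
```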
Ollama chat model integration.
Install `langchain-ollama` and download any models you want to use from Ollama:
ollama pull gpt-oss:20b
pip install -U langchain-ollama

Key init args - completion params:

model: str
    Name of Ollama model to use.
reasoning: bool | None
    Controls the reasoning/thinking mode for supported models.
- `True`: Enables reasoning mode. The model's reasoning process will be
captured and returned separately in the `additional_kwargs` of the
response message, under `reasoning_content`. The main response
content will not include the reasoning tags.
- `False`: Disables reasoning mode. The model will not perform any reasoning,
and the response will not include any reasoning content.
- `None` (Default): The model will use its default reasoning behavior. Note
however, if the model's default behavior *is* to perform reasoning, think tags
(`<think>` and `</think>`) will be present within the main response content
unless you set `reasoning` to `True`.
temperature: float
Sampling temperature. Ranges from `0.0` to `1.0`.
num_predict: int | None
Max number of tokens to generate.
See full list of supported init args and their descriptions in the params section.
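As a side note on the `reasoning` default described above: when a model's default behavior is to think, think tags arrive inline in the content, and a small post-processing helper (not part of the library) can strip them:

```python
import re


def strip_think_tags(text: str) -> str:
    """Remove inline `<think>...</think>` blocks that some models emit
    when `reasoning` is left at its default of `None`. (Illustrative
    helper, not provided by langchain-ollama.)"""
    return re.sub(r"<think>.*?</think>\s*", "", text, flags=re.DOTALL)
```

Setting `reasoning=True` avoids the need for this, since the reasoning is then returned separately under `additional_kwargs['reasoning_content']`.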