ChatOllama()

reasoning: bool | str | None
    Controls the reasoning/thinking mode for supported models.

    - `True`: Enables reasoning mode. The model's reasoning process will be
      captured and returned separately in the `additional_kwargs` of the
      response message, under `reasoning_content`. The main response
      content will not include the reasoning tags.
    - `False`: Disables reasoning mode. The model will not perform any
      reasoning, and the response will not include any reasoning content.
    - `None` (Default): The model will use its default reasoning behavior. Note,
      however, that if the model's default behavior *is* to perform reasoning,
      think tags (`<think>` and `</think>`) will be present within the main
      response content unless you set `reasoning` to `True`.
    - `str`: e.g. `'low'`, `'medium'`, `'high'`. Enables reasoning with a custom
      intensity level. Currently, this is only supported for gpt-oss. See the
      Ollama docs for more information.
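As a minimal sketch of the string-valued option (assuming a local gpt-oss pull; the prompt is illustrative), the captured reasoning can then be read back from `additional_kwargs`:

```python
from langchain_ollama import ChatOllama

# 'low' | 'medium' | 'high' reasoning effort is currently only
# supported for gpt-oss models.
model = ChatOllama(model="gpt-oss:20b", reasoning="high")

response = model.invoke("What is the square root of 1764?")
print(response.additional_kwargs["reasoning_content"])  # chain of thought
print(response.content)  # final answer, without reasoning tags
```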
async_client_kwargs: dict
    Additional kwargs to merge with `client_kwargs` before passing to the
    httpx client. These are kwargs unique to the async client; for shared
    args, use `client_kwargs`. For a full list of the params, see the
    httpx documentation.

sync_client_kwargs: dict
    Additional kwargs to merge with `client_kwargs` before passing to the
    httpx client. These are kwargs unique to the sync client; for shared
    args, use `client_kwargs`. For a full list of the params, see the
    httpx documentation.
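For illustration, a hedged sketch of passing httpx constructor options through these hooks (`timeout` and `headers` are standard `httpx.Client` arguments; the values are placeholders):

```python
import httpx
from langchain_ollama import ChatOllama

model = ChatOllama(
    model="llama3",
    # Shared httpx args, applied to both the sync and async clients.
    client_kwargs={"timeout": httpx.Timeout(60.0)},
    # Args merged in for the sync client only.
    sync_client_kwargs={"headers": {"x-request-source": "sync"}},
    # Args merged in for the async client only.
    async_client_kwargs={"headers": {"x-request-source": "async"}},
)
```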
Ollama chat model integration.

Setup:

Install `langchain-ollama` and download any models you want to use from Ollama:

```bash
pip install -U langchain-ollama
ollama pull gpt-oss:20b
```

Key init args — completion params:

model: str
    Name of Ollama model to use.
reasoning: bool | str | None
    Controls the reasoning/thinking mode for supported models.
- `True`: Enables reasoning mode. The model's reasoning process will be
captured and returned separately in the `additional_kwargs` of the
response message, under `reasoning_content`. The main response
content will not include the reasoning tags.
- `False`: Disables reasoning mode. The model will not perform any reasoning,
and the response will not include any reasoning content.
- `None` (Default): The model will use its default reasoning behavior. Note,
  however, that if the model's default behavior *is* to perform reasoning, think tags
  (`<think>` and `</think>`) will be present within the main response content
  unless you set `reasoning` to `True`.
- `str`: e.g. `'low'`, `'medium'`, `'high'`. Enables reasoning with a custom
  intensity level. Currently, this is only supported for gpt-oss. See the
  Ollama docs for more information.
temperature: float
Sampling temperature. Ranges from `0.0` to `1.0`.
num_predict: int | None
Max number of tokens to generate.
See full list of supported init args and their descriptions in the params section.
Instantiate:

```python
from langchain_ollama import ChatOllama

model = ChatOllama(
    model="gpt-oss:20b",
    validate_model_on_init=True,
    temperature=0.8,
    num_predict=256,
    # other params ...
)
```
Invoke:

```python
messages = [
    ("system", "You are a helpful translator. Translate the user sentence to French."),
    ("human", "I love programming."),
]
model.invoke(messages)
```

```
AIMessage(content='J\'adore le programmation. (Note: "programming" can also refer to the act of writing code, so if you meant that, I could translate it as "J\'adore programmer". But since you didn\'t specify, I assumed you were talking about the activity itself, which is what "le programmation" usually refers to.)', response_metadata={'model': 'llama3', 'created_at': '2024-07-04T03:37:50.182604Z', 'message': {'role': 'assistant', 'content': ''}, 'done_reason': 'stop', 'done': True, 'total_duration': 3576619666, 'load_duration': 788524916, 'prompt_eval_count': 32, 'prompt_eval_duration': 128125000, 'eval_count': 71, 'eval_duration': 2656556000}, id='run-ba48f958-6402-41a5-b461-5e250a4ebd36-0')
```
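As a hedged sketch of composing the model into a chain (standard LangChain prompt piping; the template text is illustrative):

```python
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful translator. Translate the user sentence to {language}."),
        ("human", "{input}"),
    ]
)
# Pipe the prompt into the model from the Instantiate example above.
chain = prompt | model
chain.invoke({"language": "French", "input": "I love programming."})
```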
Stream:

```python
for chunk in model.stream("Return the words Hello World!"):
    print(chunk)
```

```
content='Hello' id='run-327ff5ad-45c8-49fe-965c-0a93982e9be1'
content=' World' id='run-327ff5ad-45c8-49fe-965c-0a93982e9be1'
content='!' id='run-327ff5ad-45c8-49fe-965c-0a93982e9be1'
content='' response_metadata={'model': 'llama3', 'created_at': '2024-07-04T03:39:42.274449Z', 'message': {'role': 'assistant', 'content': ''}, 'done_reason': 'stop', 'done': True, 'total_duration': 411875125, 'load_duration': 1898166, 'prompt_eval_count': 14, 'prompt_eval_duration': 297320000, 'eval_count': 4, 'eval_duration': 111099000} id='run-327ff5ad-45c8-49fe-965c-0a93982e9be1'
```
```python
stream = model.stream(messages)
full = next(stream)
for chunk in stream:
    full += chunk
full
```

```
AIMessageChunk(
    content='Je adore le programmation.(Note: "programmation" is the formal way to say "programming" in French, but informally, people might use the phrase "le développement logiciel" or simply "le code")',
    response_metadata={
        "model": "llama3",
        "created_at": "2024-07-04T03:38:54.933154Z",
        "message": {"role": "assistant", "content": ""},
        "done_reason": "stop",
        "done": True,
        "total_duration": 1977300042,
        "load_duration": 1345709,
        "prompt_eval_duration": 159343000,
        "eval_count": 47,
        "eval_duration": 1815123000,
    },
    id="run-3c81a3ed-3e79-4dd3-a796-04064d804890",
)
```
Async:

```python
await model.ainvoke("Hello how are you!")
```

```
AIMessage(
    content="Hi there! I'm just an AI, so I don't have feelings or emotions like humans do. But I'm functioning properly and ready to help with any questions or tasks you may have! How can I assist you today?",
    response_metadata={
        "model": "llama3",
        "created_at": "2024-07-04T03:52:08.165478Z",
        "message": {"role": "assistant", "content": ""},
        "done_reason": "stop",
        "done": True,
        "total_duration": 2138492875,
        "load_duration": 1364000,
        "prompt_eval_count": 10,
        "prompt_eval_duration": 297081000,
        "eval_count": 47,
        "eval_duration": 1838524000,
    },
    id="run-29c510ae-49a4-4cdd-8f23-b972bfab1c49-0",
)
```

```python
async for chunk in model.astream("Say hello world!"):
    print(chunk.content)
```

```
HEL
LO
WORLD
!
```
```python
messages = [("human", "Say hello world!"), ("human", "Say goodbye world!")]
await model.abatch(messages)
```

```
[
    AIMessage(
        content="HELLO, WORLD!",
        response_metadata={
            "model": "llama3",
            "created_at": "2024-07-04T03:55:07.315396Z",
            "message": {"role": "assistant", "content": ""},
            "done_reason": "stop",
            "done": True,
            "total_duration": 1696745458,
            "load_duration": 1505000,
            "prompt_eval_count": 8,
            "prompt_eval_duration": 111627000,
            "eval_count": 6,
            "eval_duration": 185181000,
        },
        id="run-da6c7562-e25a-4a44-987a-2c83cd8c2686-0",
    ),
    AIMessage(
        content="It's been a blast chatting with you! Say goodbye to the world for me, and don't forget to come back and visit us again soon!",
        response_metadata={
            "model": "llama3",
            "created_at": "2024-07-04T03:55:07.018076Z",
            "message": {"role": "assistant", "content": ""},
            "done_reason": "stop",
            "done": True,
            "total_duration": 1399391083,
            "load_duration": 1187417,
            "prompt_eval_count": 20,
            "prompt_eval_duration": 230349000,
            "eval_count": 31,
            "eval_duration": 1166047000,
        },
        id="run-96cad530-6f3e-4cf9-86b4-e0f8abba4cdb-0",
    ),
]
```
JSON mode:

```python
json_model = ChatOllama(model="llama3", format="json")
json_model.invoke(
    "Return a query for the weather in a random location and time of day with two keys: location and time_of_day. "
    "Respond using JSON only."
).content
```

```
'{"location": "Pune, India", "time_of_day": "morning"}'
```
Tool Calling:

```python
from langchain_ollama import ChatOllama
from pydantic import BaseModel, Field

class Multiply(BaseModel):
    """Multiply two integers."""

    a: int = Field(..., description="First integer")
    b: int = Field(..., description="Second integer")

model = ChatOllama(model="gpt-oss:20b").bind_tools([Multiply])
ans = model.invoke("What is 45*67")
ans.tool_calls
```

```
[
    {
        "name": "Multiply",
        "args": {"a": 45, "b": 67},
        "id": "420c3f3b-df10-4188-945f-eb3abdb40622",
        "type": "tool_call",
    }
]
```
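To complete the loop, one hedged pattern (standard LangChain message types; the inline arithmetic stands in for a real tool runtime) is to execute the call yourself and feed the result back as a `ToolMessage`:

```python
from langchain_core.messages import HumanMessage, ToolMessage

query = "What is 45*67"
ai_msg = model.invoke(query)

messages = [HumanMessage(query), ai_msg]
for tool_call in ai_msg.tool_calls:
    # Execute the tool ourselves and report the result back to the model.
    result = tool_call["args"]["a"] * tool_call["args"]["b"]
    messages.append(ToolMessage(content=str(result), tool_call_id=tool_call["id"]))

final = model.invoke(messages)
print(final.content)  # e.g. "45 * 67 = 3015"
```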
Thinking / Reasoning:

You can enable reasoning mode for models that support it by setting the
`reasoning` parameter to `True`, either in the constructor or in the
`invoke`/`stream` methods. This lets the model think through the problem and
return the reasoning process separately in the `additional_kwargs` of the
response message, under `reasoning_content`.

If `reasoning` is set to `None`, the model uses its default reasoning behavior;
any reasoning content is not captured under the `reasoning_content` key, but
appears within the main response content wrapped in think tags (`<think>` and
`</think>`).

This feature is only available for models that support reasoning.
```python
from langchain_ollama import ChatOllama

model = ChatOllama(
    model="deepseek-r1:8b",
    validate_model_on_init=True,
    reasoning=True,
)

model.invoke("how many r in the word strawberry?")

# or, on a per-invocation basis:
model.invoke("how many r in the word strawberry?", reasoning=True)
# or: model.stream("how many r in the word strawberry?", reasoning=True)

# If not provided, the invocation defaults to the `reasoning` value set on
# ChatOllama (None by default).
```

```
AIMessage(content='The word "strawberry" contains **three \'r\' letters**. Here\'s a breakdown for clarity:\n\n- The spelling of "strawberry" has two parts ... be 3.\n\nTo be thorough, let\'s confirm with an online source or common knowledge.\n\nI can recall that "strawberry" has: s-t-r-a-w-b-e-r-r-y — yes, three r\'s.\n\nPerhaps it\'s misspelled by some, but standard is correct.\n\nSo I think the response should be 3.\n'}, response_metadata={'model': 'deepseek-r1:8b', 'created_at': '2025-07-08T19:33:55.891269Z', 'done': True, 'done_reason': 'stop', 'total_duration': 98232561292, 'load_duration': 28036792, 'prompt_eval_count': 10, 'prompt_eval_duration': 40171834, 'eval_count': 3615, 'eval_duration': 98163832416, 'model_name': 'deepseek-r1:8b'}, id='run--18f8269f-6a35-4a7c-826d-b89d52c753b3-0', usage_metadata={'input_tokens': 10, 'output_tokens': 3615, 'total_tokens': 3625})
```
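A short follow-on sketch (using the `reasoning_content` key described above) for separating the captured reasoning from the final answer:

```python
response = model.invoke("how many r in the word strawberry?")

# The chain of thought is surfaced separately from the answer:
print(response.additional_kwargs["reasoning_content"])
print(response.content)
```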