Constructor fields (`ChatOllamaInput`):

- `baseUrl` (optional): The host URL of the Ollama server.
- `checkOrPullModel` (optional): Whether or not to check that the model exists on the local machine before invoking it. If set to `true`, the model will be pulled if it does not exist.
- `model` (optional): The model to invoke. If the model does not exist, it will be pulled.
- Additional optional sampling and runtime fields mirroring the Ollama API: `embeddingOnly`, `f16Kv`, `format`, `frequencyPenalty`, `keepAlive`, `logitsAll`, `lowVram`, `mainGpu`, `mirostat`, `mirostatEta`, `mirostatTau`, `numa`, `numBatch`, `numCtx`, `numGpu`, `numKeep`, `numPredict`, `numThread`, `penalizeNewline`, `presencePenalty`, `repeatLastN`, `repeatPenalty`, `seed`, `streaming`, `temperature`, `tfsZ`, `think`, `topK`, `topP`, `typicalP`, `useMlock`, `useMmap`, `vocabOnly`.

`pull(model, options?)`

Download a model onto the local machine.
- `model`: The name of the model to download.
- `options` (optional, `PullModelOptions`): Options for pulling the model.
Ollama chat model integration.
Setup: Install `@langchain/ollama` (e.g. `npm install @langchain/ollama`) and the Ollama app.

Constructor args
Runtime args

Runtime args can be passed as the second argument to any of the base runnable methods: `.invoke`, `.stream`, `.batch`, etc. They can also be passed via `.withConfig`, or as the second argument to `.bindTools`, as shown in the examples below.

Examples
Instantiate
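A minimal instantiation sketch; the model tag and parameter values are illustrative:

```typescript
import { ChatOllama } from "@langchain/ollama";

const llm = new ChatOllama({
  model: "llama3", // assumed model tag
  temperature: 0,
  // baseUrl defaults to the local Ollama server, http://localhost:11434
});
```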
Invoking
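Invoking the model with a list of messages, reusing the `llm` instance from above:

```typescript
import { HumanMessage, SystemMessage } from "@langchain/core/messages";

const messages = [
  new SystemMessage("You are a helpful assistant that translates English to French."),
  new HumanMessage("I love programming."),
];

const aiMsg = await llm.invoke(messages);
console.log(aiMsg.content);
```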
Streaming Chunks
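Streaming yields message chunks as they arrive; a sketch using the same `llm`:

```typescript
const stream = await llm.stream("Why is the sky blue?");

for await (const chunk of stream) {
  // Each chunk carries an incremental piece of the response content.
  process.stdout.write(String(chunk.content));
}
```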
Bind tools
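A sketch of tool binding; the tool name, schema, and behavior are invented for illustration, and the underlying model must support tool calling:

```typescript
import { z } from "zod";
import { tool } from "@langchain/core/tools";

// A toy tool for illustration purposes only.
const getWeather = tool(
  async (input: { location: string }) => `It is sunny in ${input.location}.`,
  {
    name: "get_weather",
    description: "Get the current weather for a location.",
    schema: z.object({ location: z.string() }),
  }
);

const llmWithTools = llm.bindTools([getWeather]);
const aiMsg = await llmWithTools.invoke("What is the weather in Paris?");
console.log(aiMsg.tool_calls);
```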
Structured Output
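A sketch of structured output with a zod schema; the joke schema is an assumption for illustration:

```typescript
import { z } from "zod";

// Illustrative schema; the field names are assumptions.
const jokeSchema = z.object({
  setup: z.string().describe("The setup of the joke"),
  punchline: z.string().describe("The punchline of the joke"),
});

const structuredLlm = llm.withStructuredOutput(jokeSchema, { name: "joke" });
const joke = await structuredLlm.invoke("Tell me a joke about cats.");
// joke is parsed into { setup: string; punchline: string }
console.log(joke);
```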
Usage Metadata
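Token usage, when reported by the model, is exposed on the returned message; the counts in the comment are illustrative:

```typescript
const aiMsg = await llm.invoke("Hello!");
console.log(aiMsg.usage_metadata);
// e.g. { input_tokens: 5, output_tokens: 9, total_tokens: 14 }
```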
Response Metadata
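Provider-specific details from the Ollama response are exposed on `response_metadata`:

```typescript
const aiMsg = await llm.invoke("Hello!");
// Includes Ollama-specific fields such as the model name and eval/timing counters.
console.log(aiMsg.response_metadata);
```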