ChatOllama

Ollama chat model integration.

Setup: Install @langchain/ollama and the Ollama app.

Constructor args

Fields on ChatOllamaInput (all optional):

- baseUrl: The host URL of the Ollama server.
- checkOrPullModel: Whether or not to check that the model exists on the local machine before invoking it. If set to true, the model will be pulled if it does not exist.
- model: The model to invoke. If the model does not exist, it will be pulled.
- Model and sampling options passed through to Ollama: embeddingOnly, f16Kv, format, frequencyPenalty, keepAlive, logitsAll, lowVram, mainGpu, mirostat, mirostatEta, mirostatTau, numa, numBatch, numCtx, numGpu, numKeep, numPredict, numThread, penalizeNewline, presencePenalty, repeatLastN, repeatPenalty, seed, streaming, temperature, tfsZ, topK, topP, typicalP, useMlock, useMmap, vocabOnly.
Methods

pull(model, options?): Download a model onto the local machine.

- model: The name of the model to download.
- options (optional): PullModelOptions. Options for pulling the model.
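A minimal sketch of downloading a model ahead of time; the model name is a placeholder, and the PullModelOptions argument is omitted because its fields are not documented here:

```typescript
import { ChatOllama } from "@langchain/ollama";

const llm = new ChatOllama({ model: "llama3" });

// Download the model onto the local machine before the first call.
await llm.pull("llama3");
```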
Runtime args
Runtime args can be passed as the second argument to any of the base runnable methods: .invoke, .stream, .batch, etc. They can also be passed via .withConfig, or as the second arg in .bindTools, as shown in the examples below.

Examples
Instantiate
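A minimal sketch; the model name and option values are placeholders:

```typescript
import { ChatOllama } from "@langchain/ollama";

const llm = new ChatOllama({
  model: "llama3", // pulled automatically if it does not exist locally
  temperature: 0,
  // any other ChatOllamaInput fields, e.g. numCtx, topP, keepAlive...
});
```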
Invoking
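Reusing the llm instance from Instantiate; the prompt is a placeholder:

```typescript
const aiMsg = await llm.invoke([
  ["system", "You are a helpful translator. Translate the user sentence to French."],
  ["human", "I love programming."],
]);
console.log(aiMsg.content);
```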
Streaming Chunks
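Streaming yields the response incrementally instead of as a single message:

```typescript
const stream = await llm.stream("Why is the sky blue?");

for await (const chunk of stream) {
  // Each chunk is an AIMessageChunk carrying a partial `content`.
  console.log(chunk.content);
}
```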
Bind tools
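A sketch of binding tools to the model; get_weather is a hypothetical tool for illustration, and the underlying model must support tool calling:

```typescript
import { tool } from "@langchain/core/tools";
import { z } from "zod";

// Hypothetical tool for illustration.
const getWeather = tool(
  async ({ location }) => `It is sunny in ${location}.`,
  {
    name: "get_weather",
    description: "Get the current weather for a location.",
    schema: z.object({ location: z.string() }),
  }
);

const llmWithTools = llm.bindTools([getWeather]);
const aiMsg = await llmWithTools.invoke("What is the weather in Paris?");
console.log(aiMsg.tool_calls);
```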
Structured Output
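A sketch using withStructuredOutput with a zod schema; the schema and prompt are placeholders:

```typescript
import { z } from "zod";

const Joke = z.object({
  setup: z.string().describe("The setup of the joke"),
  punchline: z.string().describe("The punchline of the joke"),
});

const structuredLlm = llm.withStructuredOutput(Joke, { name: "joke" });
const joke = await structuredLlm.invoke("Tell me a joke about cats.");
console.log(joke); // an object matching the Joke schema
```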
Usage Metadata
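Token counts are reported on the returned AIMessage:

```typescript
const aiMsg = await llm.invoke("Hello!");
console.log(aiMsg.usage_metadata);
// shape: { input_tokens, output_tokens, total_tokens }
```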
Response Metadata
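Provider-specific details returned by Ollama are exposed on response_metadata:

```typescript
const aiMsg = await llm.invoke("Hello!");
console.log(aiMsg.response_metadata);
```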