Run the LLM on the given prompts and input, handling caching.
```ts
generate(
  prompts: string[],
  options?: string[] | Partial<OllamaCallOptions>,
  callbacks?: Callbacks
): Promise<LLMResult>
```

| Name | Type | Description |
|---|---|---|
| `prompts`* | `string[]` | The prompts to run the LLM on. |
| `options` | `string[] \| Partial<OllamaCallOptions>` | Call options for this run; a plain `string[]` is treated as a list of stop sequences. |
| `callbacks` | `Callbacks` | Callbacks to invoke during generation. |

**Returns:** `Promise<LLMResult>`
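A minimal usage sketch, assuming the LangChain community Ollama integration. The import path, model name, and base URL are assumptions for illustration, not part of this reference:

```ts
import { Ollama } from "@langchain/community/llms/ollama"; // import path is an assumption

async function main() {
  // Model name and base URL are placeholders; adjust to your local setup.
  const llm = new Ollama({ model: "llama3", baseUrl: "http://localhost:11434" });

  // Generate completions for several prompts in one call. When a cache is
  // configured on the model, repeated prompts are served from the cache.
  const result = await llm.generate(
    ["Why is the sky blue?", "Name three prime numbers."],
    { temperature: 0.2 } // call options; a string[] here would be stop sequences
  );

  // LLMResult.generations is a Generation[][]: one inner array per input prompt.
  for (const [i, gens] of result.generations.entries()) {
    console.log(`Prompt ${i}: ${gens[0].text}`);
  }
}

main().catch(console.error);
```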