Run the LLM on the given prompts and input, handling caching.
generate(
  prompts: string[],
  options?: string[] | Partial<MistralAICallOptions>,
  callbacks?: Callbacks
): Promise<LLMResult>

| Name | Type | Description |
|---|---|---|
| prompts* | string[] | The string prompts to run the model on. |
| options | string[] \| Partial<MistralAICallOptions> | Call options, or an array of stop sequences. |
| callbacks | Callbacks | Callbacks to use during generation. |
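
A minimal usage sketch is shown below. It assumes the `MistralAI` LLM class is imported from the `@langchain/mistralai` package, that `MISTRAL_API_KEY` is set in the environment, and that the model name used here is available; adjust these to your setup.

```typescript
// Minimal sketch: batch-generate completions for several prompts.
// Assumptions: MistralAI is exported from "@langchain/mistralai",
// MISTRAL_API_KEY is set, and "mistral-small-latest" is a valid model name.
import { MistralAI } from "@langchain/mistralai";

const model = new MistralAI({
  model: "mistral-small-latest",
  temperature: 0,
});

// generate() takes an array of prompts and returns an LLMResult whose
// `generations` field holds one array of candidate generations per prompt.
// The second argument is either call options or a plain string[] of stop sequences.
const result = await model.generate(
  ["Translate 'hello' to French.", "Translate 'goodbye' to French."],
  { stop: ["\n\n"] }
);

for (const generations of result.generations) {
  console.log(generations[0].text);
}
```

Because `generate` handles caching, repeated calls with the same prompts and model settings can be served from the configured cache instead of re-invoking the API.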