Run the LLM on the given prompts and input, handling caching.
generate(
  prompts: string[],
  options: string[] | Partial<CallOptions>,
  callbacks: Callbacks
): Promise<LLMResult>

Parameters:
  prompts: string[]
  options: string[] | Partial<CallOptions>
  callbacks: Callbacks
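A minimal sketch of the generate() contract. The MockLLM class and its echo behavior below are hypothetical stand-ins, not part of the library; a real call would go through a concrete LLM subclass, which also handles caching as described above.

```typescript
// Shape of the result: one list of generations per input prompt.
interface LLMResult {
  generations: { text: string }[][];
}

// Hypothetical stand-in for an LLM class exposing generate().
class MockLLM {
  async generate(prompts: string[]): Promise<LLMResult> {
    // A real implementation would check its cache, call the model for
    // uncached prompts, and merge the results in prompt order.
    return {
      generations: prompts.map((p) => [{ text: `echo: ${p}` }]),
    };
  }
}

async function main() {
  const llm = new MockLLM();
  const result = await llm.generate(["Hello", "World"]);
  // result.generations[i] holds the generations for prompts[i].
  console.log(result.generations.length);
  console.log(result.generations[0][0].text);
}

main();
```

Note that the outer array of `generations` is indexed by prompt, so batching several prompts in one generate() call still lets you recover each prompt's output.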