Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you want to:
1. Take advantage of batched calls.
2. Get more output from the model than just the top generated value.
3. Build chains that are agnostic to the underlying language model type (e.g., pure text completion models vs. chat models).
```python
generate_prompt(
    self,
    prompts: list[PromptValue],
    stop: list[str] | None = None,
    callbacks: Callbacks = None,
    **kwargs: Any,
) -> LLMResult
```

| Name | Type | Description |
|---|---|---|
| prompts* | list[PromptValue] | List of PromptValue objects to generate from. |
| stop | list[str] \| None | Default: None. Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. |
| callbacks | Callbacks | Default: None. Used for executing additional functionality, such as logging or streaming, throughout generation. |
| **kwargs | Any | Default: {}. Arbitrary additional keyword arguments. These are usually passed to the model provider API call. |