Asynchronously pass a sequence of prompts to a model and return generations.
This method should make use of batched calls for models that expose a batched API.
Use this method to take advantage of batched calls, to get more output from the model than just the top generated value, or when building chains that are agnostic to the underlying language model type.
```python
agenerate(
    self,
    prompts: list[str],
    stop: list[str] | None = None,
    callbacks: Callbacks | list[Callbacks] | None = None,
    *,
    tags: list[str] | list[list[str]] | None = None,
    metadata: dict[str, Any] | list[dict[str, Any]] | None = None,
    run_name: str | list[str] | None = None,
    run_id: uuid.UUID | list[uuid.UUID | None] | None = None,
    **kwargs: Any,
) -> LLMResult
```

| Name | Type | Description |
|---|---|---|
| `prompts`* | `list[str]` | List of string prompts. |
| `stop` | `list[str] \| None` | Default: `None`. Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. |
| `callbacks` | `Callbacks \| list[Callbacks] \| None` | Default: `None`. Used for executing additional functionality, such as logging or streaming, throughout generation. |
| `tags` | `list[str] \| list[list[str]] \| None` | Default: `None`. List of tags to associate with each prompt. If provided, the length of the list must match the length of the prompts list. |
| `metadata` | `dict[str, Any] \| list[dict[str, Any]] \| None` | Default: `None`. List of metadata dictionaries to associate with each prompt. If provided, the length of the list must match the length of the prompts list. |
| `run_name` | `str \| list[str] \| None` | Default: `None`. List of run names to associate with each prompt. If provided, the length of the list must match the length of the prompts list. |
| `run_id` | `uuid.UUID \| list[uuid.UUID \| None] \| None` | Default: `None`. List of run IDs to associate with each prompt. If provided, the length of the list must match the length of the prompts list. |
| `**kwargs` | `Any` | Arbitrary additional keyword arguments. These are usually passed to the model provider API call. |