Update the cache with the newly generated results and return the LLM output.
update_cache(cache: BaseCache | bool | None, existing_prompts: dict[int, list], llm_string: str, missing_prompt_idxs: list[int], new_results: LLMResult, prompts: list[str]) -> dict | None
Parameters:
- cache (BaseCache | bool | None): Cache object to update, or a boolean/None flag selecting the globally configured cache.
- existing_prompts (dict[int, list]): Mapping from prompt index to already-cached generations; updated in place with the new results.
- llm_string (str): String representation of the LLM configuration, used as part of the cache key.
- missing_prompt_idxs (list[int]): Indexes of prompts that missed the cache and were sent to the model.
- new_results (LLMResult): Generations produced by the model for the missing prompts.
- prompts (list[str]): Full list of prompts, used to look up the prompt text for each missing index.
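To make the parameter flow concrete, here is a minimal sketch of one cached-generation round trip. It assumes a recent langchain_core where the cache object is passed in explicitly; the llm_string value is a hypothetical stand-in for the serialized model configuration that a real LLM call would derive.

```python
from langchain_core.caches import InMemoryCache
from langchain_core.language_models.llms import update_cache
from langchain_core.outputs import Generation, LLMResult

cache = InMemoryCache()
prompts = ["What is 2 + 2?", "Name a prime number."]

# Suppose prompt 0 was already served from the cache and prompt 1 missed,
# so only prompt 1 was sent to the model.
existing_prompts = {0: [Generation(text="4")]}
missing_prompt_idxs = [1]
new_results = LLMResult(generations=[[Generation(text="7")]], llm_output={})

# Hypothetical stand-in: a real call derives this string from the model's
# serialized parameters so cache entries are scoped to one configuration.
llm_string = "fake-llm-config"

llm_output = update_cache(
    cache, existing_prompts, llm_string, missing_prompt_idxs, new_results, prompts
)

# existing_prompts now holds generations for both prompts, and the cache
# has gained an entry for the previously missing one.
assert cache.lookup(prompts[1], llm_string) is not None
```

Because the new generations are merged back into existing_prompts keyed by their original indexes, the caller can reassemble results in prompt order regardless of which entries were cache hits.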