# aupdate_cache

> **Function** in `langchain_core.language_models.llms`

📖 [View in docs](https://reference.langchain.com/python/langchain-core/language_models/llms/aupdate_cache)

Update the cache with newly generated results and return the LLM output. This is the async counterpart of `update_cache`.

## Signature

```python
aupdate_cache(
    cache: BaseCache | bool | None,
    existing_prompts: dict[int, list],
    llm_string: str,
    missing_prompt_idxs: list[int],
    new_results: LLMResult,
    prompts: list[str],
) -> dict | None
```

## Parameters

| Name | Type | Required | Description |
|------|------|----------|-------------|
| `cache` | `BaseCache \| bool \| None` | Yes | Cache to write to. A `BaseCache` instance is used directly; `True` or `None` falls back to the globally configured cache; `False` disables caching. |
| `existing_prompts` | `dict[int, list]` | Yes | Mapping from prompt index to previously cached generations; updated in place with the new results. |
| `llm_string` | `str` | Yes | String representation of the LLM configuration, used as part of the cache key. |
| `missing_prompt_idxs` | `list[int]` | Yes | Indexes into `prompts` for the prompts that were not found in the cache. |
| `new_results` | `LLMResult` | Yes | Freshly generated results for the missing prompts, in the same order as `missing_prompt_idxs`. |
| `prompts` | `list[str]` | Yes | Full list of prompt strings for this batch. |

## Returns

`dict | None`

The provider-specific `llm_output` metadata carried by `new_results`, or `None` if it has none.
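
## Example

A minimal sketch of the cache-aside flow this function completes. The prompts, `llm_string`, and results below are invented for illustration; in normal use the generation path of `BaseLLM` assembles these arguments internally, and `llm_string` is derived from the model's parameters rather than hand-written.

```python
import asyncio

from langchain_core.caches import InMemoryCache
from langchain_core.language_models.llms import aupdate_cache
from langchain_core.outputs import Generation, LLMResult


async def main() -> None:
    cache = InMemoryCache()

    # Hypothetical batch of prompts; in real use these come from the LLM call.
    prompts = ["What is 2 + 2?", "Name a prime number."]
    # Placeholder; normally derived from the model configuration.
    llm_string = "fake-llm-config"

    # Neither prompt was found in the cache, so both indexes are "missing".
    existing_prompts: dict[int, list] = {}
    missing_prompt_idxs = [0, 1]

    # Freshly generated results for the missing prompts: one inner list of
    # generations per prompt, plus provider-specific metadata.
    new_results = LLMResult(
        generations=[[Generation(text="4")], [Generation(text="7")]],
        llm_output={"token_usage": {}},
    )

    llm_output = await aupdate_cache(
        cache,
        existing_prompts,
        llm_string,
        missing_prompt_idxs,
        new_results,
        prompts,
    )
    print(llm_output)           # {'token_usage': {}}
    print(existing_prompts[0])  # [Generation(text='4')] -- filled in place

    # The same (prompt, llm_string) key now hits the cache.
    print(await cache.alookup(prompts[0], llm_string))


asyncio.run(main())
```

After the call, `existing_prompts` holds the generations for every previously missing index, and the cache can serve the same `(prompt, llm_string)` keys on subsequent lookups.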

---

[View source on GitHub](https://github.com/langchain-ai/langchain/blob/f0c5a28fa05adcda89aebcb449d897245ab21fa4/libs/core/langchain_core/language_models/llms.py#L259)