langchain_core.language_models.llms.aupdate_cache
- async langchain_core.language_models.llms.aupdate_cache(existing_prompts: Dict[int, List], llm_string: str, missing_prompt_idxs: List[int], new_results: LLMResult, prompts: List[str]) → Optional[dict]
Update the cache and get the LLM output. Async version of update_cache.
- Parameters
existing_prompts (Dict[int, List]) – Mapping from prompt index to cached generations; updated in place with the new results.
llm_string (str) – String representation of the LLM configuration, used as part of the cache key.
missing_prompt_idxs (List[int]) – Indices of the prompts that were not found in the cache.
new_results (LLMResult) – Newly generated results for the missing prompts.
prompts (List[str]) – The full list of prompt strings for this call.
- Return type
Optional[dict]
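
A minimal usage sketch, not part of the reference itself: it assumes the signature documented above, a process-wide cache configured via set_llm_cache(InMemoryCache()), and a placeholder llm_string; all concrete names and values below are illustrative.

    import asyncio
    from typing import Dict, List

    from langchain_core.caches import InMemoryCache
    from langchain_core.globals import set_llm_cache
    from langchain_core.language_models.llms import aupdate_cache
    from langchain_core.outputs import Generation, LLMResult


    async def main() -> None:
        # aupdate_cache writes through whatever global cache is configured.
        set_llm_cache(InMemoryCache())

        prompts = ["What is 2 + 2?", "Name a prime number."]

        # Suppose prompt 0 was already cached; only prompt 1 missed the cache.
        existing_prompts: Dict[int, List] = {0: [Generation(text="4")]}
        missing_prompt_idxs: List[int] = [1]

        # Fresh results for the missing prompts, one generation list per prompt.
        new_results = LLMResult(
            generations=[[Generation(text="7")]],
            llm_output={"token_usage": {}},
        )

        llm_output = await aupdate_cache(
            existing_prompts,
            "fake-llm-config",  # placeholder llm_string forming the cache key
            missing_prompt_idxs,
            new_results,
            prompts,
        )
        # existing_prompts now holds generations for indexes 0 and 1,
        # and llm_output echoes new_results.llm_output.
        print(existing_prompts, llm_output)


    asyncio.run(main())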