langchain_core.language_models.llms.update_cache¶

langchain_core.language_models.llms.update_cache(existing_prompts: Dict[int, List], llm_string: str, missing_prompt_idxs: List[int], new_results: LLMResult, prompts: List[str]) → Optional[dict][source]¶

Update the cache with newly generated results and return the LLM output.

Parameters

    existing_prompts (Dict[int, List]) – Generations already retrieved from the cache, keyed by prompt index.

    llm_string (str) – String representation of the LLM configuration, used as part of the cache key.

    missing_prompt_idxs (List[int]) – Indices of the prompts that were not found in the cache.

    new_results (LLMResult) – Newly generated results for the missing prompts.

    prompts (List[str]) – The full list of prompts.

Returns

    The llm_output metadata from new_results, if any.
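The behavior can be illustrated with a minimal, self-contained sketch. Note the assumptions: the in-memory `llm_cache` dict and the flattening of `LLMResult` into a list of generations plus an `llm_output` dict are simplifications for illustration; the real function works with LangChain's `LLMResult` and `BaseCache` objects.

```python
from typing import Dict, List, Optional

# Hypothetical in-memory cache keyed by (prompt, llm_string); the real
# implementation delegates to a configured BaseCache instance.
llm_cache: Dict[tuple, List[str]] = {}

def update_cache(
    existing_prompts: Dict[int, List],
    llm_string: str,
    missing_prompt_idxs: List[int],
    new_generations: List[List[str]],   # stand-in for LLMResult.generations
    new_llm_output: Optional[dict],     # stand-in for LLMResult.llm_output
    prompts: List[str],
) -> Optional[dict]:
    """Merge newly generated results into existing_prompts, write them
    to the cache, and return the LLM output metadata."""
    for i, result in enumerate(new_generations):
        idx = missing_prompt_idxs[i]
        existing_prompts[idx] = result                  # merge into results
        llm_cache[(prompts[idx], llm_string)] = result  # persist to cache
    return new_llm_output
```

Usage: after a batch where prompt 1 missed the cache, the new generation is merged in and cached, and the metadata is passed through.

```python
existing = {0: ["cached answer"]}
out = update_cache(
    existing, "llm-config", [1],
    [["fresh answer"]], {"token_usage": {}},
    ["p0", "p1"],
)
# existing now maps both indices; ("p1", "llm-config") is cached.
```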