langchain_core.language_models.llms.get_prompts

langchain_core.language_models.llms.get_prompts(params: Dict[str, Any], prompts: List[str]) → Tuple[Dict[int, List], str, List[int], List[str]]

Get prompts that are already cached.
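
A minimal usage sketch, assuming the globally configured LLM cache (set via langchain_core.globals.set_llm_cache) is consulted for each prompt: prompts with a cache hit are returned keyed by their index, while the remaining prompts and their indices are returned separately so the caller can generate only the misses. The params dict and prompt strings below are illustrative, not values the library requires.

    from langchain_core.caches import InMemoryCache
    from langchain_core.globals import set_llm_cache
    from langchain_core.language_models.llms import get_prompts

    # Install a global in-memory cache so lookups have somewhere to hit.
    set_llm_cache(InMemoryCache())

    # Illustrative LLM params; they are folded into the cache key.
    params = {"model_name": "my-model", "temperature": 0.0}
    prompts = ["Tell me a joke", "Tell me a fact"]

    existing, llm_string, missing_idxs, missing = get_prompts(params, prompts)

    # With a cold cache nothing is found, so every prompt is reported missing:
    #   existing     -> {}
    #   missing_idxs -> [0, 1]
    #   missing      -> ["Tell me a joke", "Tell me a fact"]
    # llm_string is a string representation of params that, together with each
    # prompt, identifies the cache entry.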