langchain_core.language_models.llms.aget_prompts
- async langchain_core.language_models.llms.aget_prompts(params: Dict[str, Any], prompts: List[str]) → Tuple[Dict[int, List], str, List[int], List[str]] [source]
Get the prompts that are already present in the LLM cache. Async version of get_prompts.
- Parameters
params (Dict[str, Any]) – Dictionary of LLM call parameters; serialized to form part of the cache key.
prompts (List[str]) – List of prompt strings to look up in the cache.
- Returns
A tuple of (existing_prompts, llm_string, missing_prompt_idxs, missing_prompts): the cached generations keyed by prompt index, the serialized parameter string used as part of the cache key, the indices of prompts not found in the cache, and the prompt strings that still need to be generated.
- Return type
Tuple[Dict[int, List], str, List[int], List[str]]
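The cache-lookup pattern behind this helper can be sketched as a simplified, self-contained illustration. This is not LangChain's actual implementation: the in-memory `_CACHE` dict and the `aget_prompts_sketch` function are hypothetical stand-ins for the library's global LLM cache and its async lookup.

```python
import asyncio
from typing import Any, Dict, List, Tuple

# Hypothetical in-memory cache standing in for LangChain's global LLM cache.
# Keys are (prompt, llm_string) pairs; values are lists of cached generations.
_CACHE: Dict[Tuple[str, str], List[str]] = {}


async def aget_prompts_sketch(
    params: Dict[str, Any], prompts: List[str]
) -> Tuple[Dict[int, List[str]], str, List[int], List[str]]:
    # Serialize the call parameters into a stable string that forms
    # part of the cache key, so identical calls hit the same entries.
    llm_string = str(sorted(params.items()))
    existing: Dict[int, List[str]] = {}
    missing_idxs: List[int] = []
    missing: List[str] = []
    for i, prompt in enumerate(prompts):
        hit = _CACHE.get((prompt, llm_string))
        if hit is not None:
            existing[i] = hit          # cache hit, keyed by prompt index
        else:
            missing_idxs.append(i)     # remember which slots need generation
            missing.append(prompt)
    return existing, llm_string, missing_idxs, missing


# Usage: seed one cached prompt, then look up two prompts (one hit, one miss).
params = {"model": "m"}
_CACHE[("hello", str(sorted(params.items())))] = ["cached reply"]
existing, llm_string, missing_idxs, missing = asyncio.run(
    aget_prompts_sketch(params, ["hello", "world"])
)
```

A caller would then send only `missing` to the model and merge the new generations back into `existing` by index, which is why both the missing prompts and their original positions are returned.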