langchain_community.cache.AstraDBCache

class langchain_community.cache.AstraDBCache(*, collection_name: str = 'langchain_astradb_cache', token: Optional[str] = None, api_endpoint: Optional[str] = None, astra_db_client: Optional[AstraDB] = None, async_astra_db_client: Optional[AsyncAstraDB] = None, namespace: Optional[str] = None, pre_delete_collection: bool = False, setup_mode: SetupMode = SetupMode.SYNC)[source]

Cache that uses Astra DB as a backend.

It uses a single collection as a key-value store. The lookup keys, combined in the _id of the documents, are:

  • prompt, a string

  • llm_string, a deterministic string representation of the model parameters (needed to prevent same-prompt, different-model collisions).

Parameters
  • collection_name (str) – name of the Astra DB collection to create/use.

  • token (Optional[str]) – API token for Astra DB usage.

  • api_endpoint (Optional[str]) – full URL to the API endpoint, such as https://<DB-ID>-us-east1.apps.astra.datastax.com.

  • astra_db_client (Optional[AstraDB]) – as an alternative to token+api_endpoint, you can pass an already-created astrapy.db.AstraDB instance.

  • async_astra_db_client (Optional[AsyncAstraDB]) – as an alternative to token+api_endpoint, you can pass an already-created astrapy.db.AsyncAstraDB instance.

  • namespace (Optional[str]) – namespace (aka keyspace) where the collection is created. Defaults to the database’s “default namespace”.

  • setup_mode (SetupMode) – mode used to create the Astra DB collection (SYNC, ASYNC or OFF).

  • pre_delete_collection (bool) – whether to delete the collection before creating it. If False and the collection already exists, the collection will be used as is.
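
A minimal instantiation sketch follows; the token and endpoint values are placeholders to replace with your own Astra DB credentials. It uses set_llm_cache from langchain.globals to register the cache process-wide:

    from langchain.globals import set_llm_cache
    from langchain_community.cache import AstraDBCache

    # Placeholder credentials: substitute your own Astra DB token and endpoint.
    cache = AstraDBCache(
        token="AstraCS:...",
        api_endpoint="https://<DB-ID>-us-east1.apps.astra.datastax.com",
    )

    # Once registered, LLM calls read from and write to the Astra DB collection.
    set_llm_cache(cache)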

Methods

__init__(*[, collection_name, token, ...])

Cache that uses Astra DB as a backend.

aclear(**kwargs)

Clear cache that can take additional keyword arguments.

adelete(prompt, llm_string)

Evict from cache if there's an entry.

adelete_through_llm(prompt, llm[, stop])

A wrapper around adelete that takes the LLM instance itself.

alookup(prompt, llm_string)

Look up based on prompt and llm_string.

aupdate(prompt, llm_string, return_val)

Update cache based on prompt and llm_string.

clear(**kwargs)

Clear cache that can take additional keyword arguments.

delete(prompt, llm_string)

Evict from cache if there's an entry.

delete_through_llm(prompt, llm[, stop])

A wrapper around delete that takes the LLM instance itself.

lookup(prompt, llm_string)

Look up based on prompt and llm_string.

update(prompt, llm_string, return_val)

Update cache based on prompt and llm_string.

__init__(*, collection_name: str = 'langchain_astradb_cache', token: Optional[str] = None, api_endpoint: Optional[str] = None, astra_db_client: Optional[AstraDB] = None, async_astra_db_client: Optional[AsyncAstraDB] = None, namespace: Optional[str] = None, pre_delete_collection: bool = False, setup_mode: SetupMode = SetupMode.SYNC)[source]

Cache that uses Astra DB as a backend.

It uses a single collection as a key-value store. The lookup keys, combined in the _id of the documents, are:

  • prompt, a string

  • llm_string, a deterministic string representation of the model parameters (needed to prevent same-prompt, different-model collisions).

Parameters
  • collection_name (str) – name of the Astra DB collection to create/use.

  • token (Optional[str]) – API token for Astra DB usage.

  • api_endpoint (Optional[str]) – full URL to the API endpoint, such as https://<DB-ID>-us-east1.apps.astra.datastax.com.

  • astra_db_client (Optional[AstraDB]) – as an alternative to token+api_endpoint, you can pass an already-created astrapy.db.AstraDB instance.

  • async_astra_db_client (Optional[AsyncAstraDB]) – as an alternative to token+api_endpoint, you can pass an already-created astrapy.db.AsyncAstraDB instance.

  • namespace (Optional[str]) – namespace (aka keyspace) where the collection is created. Defaults to the database’s “default namespace”.

  • setup_mode (SetupMode) – mode used to create the Astra DB collection (SYNC, ASYNC or OFF).

  • pre_delete_collection (bool) – whether to delete the collection before creating it. If False and the collection already exists, the collection will be used as is.

async aclear(**kwargs: Any) None[source]

Clear cache that can take additional keyword arguments.

Parameters

kwargs (Any) –

Return type

None
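
A short sketch of clearing the cache from async code, reusing the cache instance constructed in the earlier example:

    import asyncio

    async def wipe(cache) -> None:
        # Remove every cached entry from the backing collection.
        await cache.aclear()

    asyncio.run(wipe(cache))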

async adelete(prompt: str, llm_string: str) None[source]

Evict from cache if there’s an entry.

Parameters
  • prompt (str) –

  • llm_string (str) –

Return type

None
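
A targeted-eviction sketch; both key strings must match the cached entry exactly:

    async def evict(cache, prompt: str, llm_string: str) -> None:
        # Remove the single entry keyed by this exact (prompt, llm_string) pair.
        await cache.adelete(prompt, llm_string)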

async adelete_through_llm(prompt: str, llm: LLM, stop: Optional[List[str]] = None) None[source]

A wrapper around adelete that takes the LLM instance itself. If the llm(prompt) calls were made with a stop parameter, pass the same value here.

Parameters
  • prompt (str) –

  • llm (LLM) –

  • stop (Optional[List[str]]) –

Return type

None
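
When you hold the LLM object rather than its llm_string, this variant derives the key for you. In this sketch, llm stands for any LangChain LLM instance (an assumption for illustration), and the stop list shown is a hypothetical value matching the original llm(prompt) call:

    async def evict_via_llm(cache, prompt: str, llm) -> None:
        # `llm` is assumed to be the same LLM whose call produced the entry.
        # The stop list below is illustrative; use the one from the original call.
        await cache.adelete_through_llm(prompt, llm, stop=["\n"])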

async alookup(prompt: str, llm_string: str) Optional[Sequence[Generation]][source]

Look up based on prompt and llm_string.

Parameters
  • prompt (str) –

  • llm_string (str) –

Return type

Optional[Sequence[Generation]]
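
A lookup sketch: a miss returns None, while a hit returns the cached sequence of Generation objects:

    async def cached_text(cache, prompt: str, llm_string: str):
        hit = await cache.alookup(prompt, llm_string)
        if hit is None:
            return None        # cache miss
        return hit[0].text     # text of the first cached Generation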

async aupdate(prompt: str, llm_string: str, return_val: Sequence[Generation]) None[source]

Update cache based on prompt and llm_string.

Parameters
  • prompt (str) –

  • llm_string (str) –

  • return_val (Sequence[Generation]) –

Return type

None
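
A write sketch, with Generation imported from langchain_core.outputs and a placeholder response text:

    from langchain_core.outputs import Generation

    async def store(cache, prompt: str, llm_string: str) -> None:
        # Placeholder generation; real callers pass the model's actual output.
        await cache.aupdate(prompt, llm_string, [Generation(text="Paris")])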

clear(**kwargs: Any) None[source]

Clear cache that can take additional keyword arguments.

Parameters

kwargs (Any) –

Return type

None

delete(prompt: str, llm_string: str) None[source]

Evict from cache if there’s an entry.

Parameters
  • prompt (str) –

  • llm_string (str) –

Return type

None

delete_through_llm(prompt: str, llm: LLM, stop: Optional[List[str]] = None) None[source]

A wrapper around delete that takes the LLM instance itself. If the llm(prompt) calls were made with a stop parameter, pass the same value here.

Parameters
  • prompt (str) –

  • llm (LLM) –

  • stop (Optional[List[str]]) –

Return type

None

lookup(prompt: str, llm_string: str) Optional[Sequence[Generation]][source]

Look up based on prompt and llm_string.

Parameters
  • prompt (str) –

  • llm_string (str) –

Return type

Optional[Sequence[Generation]]

update(prompt: str, llm_string: str, return_val: Sequence[Generation]) None[source]

Update cache based on prompt and llm_string.

Parameters
  • prompt (str) –

  • llm_string (str) –

  • return_val (Sequence[Generation]) –

Return type

None
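
The synchronous methods mirror their async counterparts one-for-one. A round-trip sketch with placeholder keys, reusing the cache instance from the constructor example:

    from langchain_core.outputs import Generation

    prompt, llm_string = "2 + 2 = ?", "fake-llm-params"  # placeholder keys

    cache.update(prompt, llm_string, [Generation(text="4")])
    assert cache.lookup(prompt, llm_string)[0].text == "4"

    cache.delete(prompt, llm_string)   # evict the single entry
    assert cache.lookup(prompt, llm_string) is None

    cache.clear()                      # empty the whole collection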