langchain_community.cache.CassandraCache¶

class langchain_community.cache.CassandraCache(session: Optional[CassandraSession] = None, keyspace: Optional[str] = None, table_name: str = 'langchain_llm_cache', ttl_seconds: Optional[int] = None, skip_provisioning: bool = False)[source]¶

Cache that uses Cassandra / Astra DB as a backend.

It uses a single Cassandra table. The lookup keys, which together form the primary key, are:

  • prompt, a string

  • llm_string, a deterministic string representation of the model parameters (needed to prevent same-prompt-different-model collisions)

Initialize with a ready session and a keyspace name.

Parameters
  • session (Optional[CassandraSession]) – an open Cassandra session

  • keyspace (Optional[str]) – the keyspace to use for storing the cache

  • table_name (str) – name of the Cassandra table to use as cache

  • ttl_seconds (Optional[int]) – time-to-live for cache entries (default: None, i.e. forever)

  • skip_provisioning (bool) –
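The (prompt, llm_string) primary key and the optional TTL can be illustrated with a minimal in-memory sketch. This is plain Python standing in for the Cassandra-backed table; the class and variable names are illustrative, not part of the library:

```python
import time
from typing import Any, Dict, Optional, Tuple


class InMemoryLLMCache:
    """Toy stand-in mirroring CassandraCache's lookup-key semantics."""

    def __init__(self, ttl_seconds: Optional[int] = None) -> None:
        self.ttl_seconds = ttl_seconds
        # Keyed on the compound (prompt, llm_string) pair.
        self._store: Dict[Tuple[str, str], Tuple[float, Any]] = {}

    def update(self, prompt: str, llm_string: str, return_val: Any) -> None:
        # The compound key prevents same-prompt-different-model collisions.
        self._store[(prompt, llm_string)] = (time.monotonic(), return_val)

    def lookup(self, prompt: str, llm_string: str) -> Optional[Any]:
        entry = self._store.get((prompt, llm_string))
        if entry is None:
            return None
        written_at, return_val = entry
        if self.ttl_seconds is not None and time.monotonic() - written_at > self.ttl_seconds:
            return None  # expired: behaves like a miss
        return return_val

    def delete(self, prompt: str, llm_string: str) -> None:
        # Evict from cache if there's an entry.
        self._store.pop((prompt, llm_string), None)


cache = InMemoryLLMCache(ttl_seconds=None)  # None, i.e. entries live forever
cache.update("2+2?", "model-A params", ["4"])
print(cache.lookup("2+2?", "model-A params"))  # ['4']
print(cache.lookup("2+2?", "model-B params"))  # None: same prompt, different model
```

In the real class, TTL is enforced by Cassandra itself rather than checked at read time, but the observable behavior, a miss once the entry has expired, is the same.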

Methods

__init__([session, keyspace, table_name, ...])

Initialize with a ready session and a keyspace name.

aclear(**kwargs)

Clear the cache; can take additional keyword arguments.

alookup(prompt, llm_string)

Look up based on prompt and llm_string.

aupdate(prompt, llm_string, return_val)

Update cache based on prompt and llm_string.

clear(**kwargs)

Clear cache.

delete(prompt, llm_string)

Evict from cache if there's an entry.

delete_through_llm(prompt, llm[, stop])

A wrapper around delete with the LLM being passed.

lookup(prompt, llm_string)

Look up based on prompt and llm_string.

update(prompt, llm_string, return_val)

Update cache based on prompt and llm_string.

__init__(session: Optional[CassandraSession] = None, keyspace: Optional[str] = None, table_name: str = 'langchain_llm_cache', ttl_seconds: Optional[int] = None, skip_provisioning: bool = False)[source]¶

Initialize with a ready session and a keyspace name.

Parameters
  • session (Optional[CassandraSession]) – an open Cassandra session

  • keyspace (Optional[str]) – the keyspace to use for storing the cache

  • table_name (str) – name of the Cassandra table to use as cache

  • ttl_seconds (Optional[int]) – time-to-live for cache entries (default: None, i.e. forever)

  • skip_provisioning (bool) –

async aclear(**kwargs: Any) → None¶

Clear the cache; can take additional keyword arguments.

Parameters

kwargs (Any) –

Return type

None

async alookup(prompt: str, llm_string: str) → Optional[Sequence[Generation]]¶

Look up based on prompt and llm_string.

Parameters
  • prompt (str) –

  • llm_string (str) –

Return type

Optional[Sequence[Generation]]

async aupdate(prompt: str, llm_string: str, return_val: Sequence[Generation]) → None¶

Update cache based on prompt and llm_string.

Parameters
  • prompt (str) –

  • llm_string (str) –

  • return_val (Sequence[Generation]) –

Return type

None

clear(**kwargs: Any) → None[source]¶

Clear cache. This is for all LLMs at once.

Parameters

kwargs (Any) –

Return type

None

delete(prompt: str, llm_string: str) → None[source]¶

Evict from cache if there’s an entry.

Parameters
  • prompt (str) –

  • llm_string (str) –

Return type

None

delete_through_llm(prompt: str, llm: LLM, stop: Optional[List[str]] = None) → None[source]¶

A wrapper around delete with the LLM being passed instead of llm_string. If the llm(prompt) calls use a stop parameter, you should pass it here as well.

Parameters
  • prompt (str) –

  • llm (LLM) –

  • stop (Optional[List[str]]) –

Return type

None
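Conceptually, this wrapper derives the deterministic llm_string from the model's parameters (including any stop list) and then delegates to delete. A schematic sketch of that pattern, with illustrative serialization and function names that are not the library's exact internals:

```python
import json
from typing import Dict, List, Optional, Tuple


def serialize_llm_params(params: dict, stop: Optional[List[str]]) -> str:
    # Deterministic string: sorted keys ensure the same parameters always
    # map to the same llm_string (illustrative format, not the library's).
    return json.dumps({**params, "stop": stop}, sort_keys=True)


def delete_through_llm_sketch(
    store: Dict[Tuple[str, str], object],
    prompt: str,
    llm_params: dict,
    stop: Optional[List[str]] = None,
) -> None:
    # Wrapper pattern: build llm_string from the model, then delegate to delete.
    llm_string = serialize_llm_params(llm_params, stop)
    store.pop((prompt, llm_string), None)


store: Dict[Tuple[str, str], object] = {}
params = {"model": "m1", "temperature": 0.0}
key = ("hi", serialize_llm_params(params, ["\n"]))
store[key] = ["cached generation"]

# Passing the same stop list reproduces the same llm_string, so the entry is found.
delete_through_llm_sketch(store, "hi", params, stop=["\n"])
print(store)  # {}
```

This is why a mismatched stop value matters: it changes the derived llm_string, so the delete would target a different (nonexistent) key.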

lookup(prompt: str, llm_string: str) → Optional[Sequence[Generation]][source]¶

Look up based on prompt and llm_string.

Parameters
  • prompt (str) –

  • llm_string (str) –

Return type

Optional[Sequence[Generation]]
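Since lookup returns None on a miss, callers typically follow the read-through pattern: look up first, and only on a miss generate and then call update. A minimal sketch with plain dicts and a fake generator (all names illustrative; the real cache stores Generation sequences):

```python
from typing import Callable, Dict, List, Sequence, Tuple

Store = Dict[Tuple[str, str], Sequence[str]]


def cached_generate(
    store: Store,
    prompt: str,
    llm_string: str,
    generate: Callable[[str], Sequence[str]],
) -> Sequence[str]:
    # Read-through caching: a None from lookup means a miss,
    # so generate and write back via update(prompt, llm_string, return_val).
    hit = store.get((prompt, llm_string))
    if hit is not None:
        return hit
    result = generate(prompt)
    store[(prompt, llm_string)] = result
    return result


calls: List[str] = []


def fake_llm(prompt: str) -> Sequence[str]:
    calls.append(prompt)  # track how often the "model" is actually invoked
    return [f"answer to {prompt}"]


store: Store = {}
first = cached_generate(store, "q", "model-A", fake_llm)
second = cached_generate(store, "q", "model-A", fake_llm)
print(first == second, len(calls))  # True 1 — second call hit the cache
```

LangChain's LLM classes perform this lookup/update cycle automatically once a cache is configured; the sketch only makes the control flow explicit.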

update(prompt: str, llm_string: str, return_val: Sequence[Generation]) → None[source]¶

Update cache based on prompt and llm_string.

Parameters
  • prompt (str) –

  • llm_string (str) –

  • return_val (Sequence[Generation]) –

Return type

None