langchain_community.embeddings.infinity.TinyAsyncOpenAIInfinityEmbeddingClient¶

class langchain_community.embeddings.infinity.TinyAsyncOpenAIInfinityEmbeddingClient(host: str = 'http://localhost:7797/v1', aiosession: Optional[ClientSession] = None)[source]¶

A helper client for embedding texts with an Infinity server. Not part of LangChain’s stable API; direct use is discouraged.

Example

mini_client = TinyAsyncOpenAIInfinityEmbeddingClient()
embeds = mini_client.embed(
    model="BAAI/bge-small",
    texts=["doc1", "doc2"],
)
# or
embeds = await mini_client.aembed(
    model="BAAI/bge-small",
    texts=["doc1", "doc2"],
)

Methods

__init__([host, aiosession])

aembed(model, texts)

Call the model's embedding endpoint asynchronously.

embed(model, texts)

Call the model's embedding endpoint.

__init__(host: str = 'http://localhost:7797/v1', aiosession: Optional[ClientSession] = None) None[source]¶
async aembed(model: str, texts: List[str]) List[List[float]][source]¶

Call the model's embedding endpoint asynchronously.

Parameters
  • model (str) – Name of the embedding model to use.

  • texts (List[str]) – List of sentences to embed.

Returns

List of vectors for each sentence

Return type

List[List[float]]

embed(model: str, texts: List[str]) List[List[float]][source]¶

Call the model's embedding endpoint.

Parameters
  • model (str) – Name of the embedding model to use.

  • texts (List[str]) – List of sentences to embed.

Returns

List of vectors for each sentence

Return type

List[List[float]]
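Both methods share the same contract: given a model name and a list of texts, they return one vector per input text, in the same order. The sketch below illustrates that contract with a hypothetical stand-in class (the real client instead sends the texts to a running Infinity server at `host`); the stub class, its vector size, and its zero-filled values are assumptions for illustration only.

```python
import asyncio
from typing import List


class StubInfinityClient:
    """Hypothetical stand-in mirroring the documented embed/aembed signatures."""

    def embed(self, model: str, texts: List[str]) -> List[List[float]]:
        # The real client POSTs the texts to the Infinity server and parses
        # the response; here we just return a placeholder vector per text.
        return [[0.0] * 4 for _ in texts]

    async def aembed(self, model: str, texts: List[str]) -> List[List[float]]:
        # Async variant with the same contract: one vector per input text.
        return self.embed(model, texts)


client = StubInfinityClient()
sync_vectors = client.embed(model="BAAI/bge-small", texts=["doc1", "doc2"])
async_vectors = asyncio.run(
    client.aembed(model="BAAI/bge-small", texts=["doc1", "doc2"])
)
```

Note that `embed` blocks the calling thread while `aembed` can run inside an existing event loop, which is why the real client accepts an optional `aiosession`.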