langchain_community.vectorstores.meilisearch.Meilisearch¶
- class langchain_community.vectorstores.meilisearch.Meilisearch(embedding: Embeddings, client: Optional[Client] = None, url: Optional[str] = None, api_key: Optional[str] = None, index_name: str = 'langchain-demo', text_key: str = 'text', metadata_key: str = 'metadata')[source]¶
- Meilisearch vector store.
- To use this, you need to have the meilisearch python package installed and a running Meilisearch instance.
- To learn more about Meilisearch Python, refer to the in-depth Meilisearch Python documentation: https://meilisearch.github.io/meilisearch-python/.
- See the following documentation for how to run a Meilisearch instance: https://www.meilisearch.com/docs/learn/getting_started/quick_start.
- Example

    from langchain_community.vectorstores import Meilisearch
    from langchain_community.embeddings.openai import OpenAIEmbeddings
    import meilisearch

    # api_key is optional; provide it if your meilisearch instance requires it
    client = meilisearch.Client(url='http://127.0.0.1:7700', api_key='***')
    embeddings = OpenAIEmbeddings()
    vectorstore = Meilisearch(
        embedding=embeddings,
        client=client,
        index_name='langchain_demo',
        text_key='text',
    )

- Initialize with Meilisearch client.
- Attributes
- embeddings – Access the query embedding object if available.
- Methods
- __init__(embedding[, client, url, api_key, ...]) – Initialize with Meilisearch client.
- aadd_documents(documents, **kwargs) – Run more documents through the embeddings and add to the vectorstore.
- aadd_texts(texts[, metadatas]) – Run more texts through the embeddings and add to the vectorstore.
- add_documents(documents, **kwargs) – Run more documents through the embeddings and add to the vectorstore.
- add_texts(texts[, metadatas, ids]) – Run more texts through the embedding and add them to the vector store.
- adelete([ids]) – Delete by vector ID or other criteria.
- afrom_documents(documents, embedding, **kwargs) – Return VectorStore initialized from documents and embeddings.
- afrom_texts(texts, embedding[, metadatas]) – Return VectorStore initialized from texts and embeddings.
- amax_marginal_relevance_search(query[, k, ...]) – Return docs selected using the maximal marginal relevance.
- amax_marginal_relevance_search_by_vector(embedding[, k, ...]) – Return docs selected using the maximal marginal relevance.
- as_retriever(**kwargs) – Return VectorStoreRetriever initialized from this VectorStore.
- asearch(query, search_type, **kwargs) – Return docs most similar to query using specified search type.
- asimilarity_search(query[, k]) – Return docs most similar to query.
- asimilarity_search_by_vector(embedding[, k]) – Return docs most similar to embedding vector.
- asimilarity_search_with_relevance_scores(query[, k]) – Return docs and relevance scores in the range [0, 1], asynchronously.
- asimilarity_search_with_score(*args, **kwargs) – Run similarity search with distance asynchronously.
- delete([ids]) – Delete by vector ID or other criteria.
- from_documents(documents, embedding, **kwargs) – Return VectorStore initialized from documents and embeddings.
- from_texts(texts, embedding[, metadatas, ...]) – Construct Meilisearch wrapper from raw documents.
- max_marginal_relevance_search(query[, k, ...]) – Return docs selected using the maximal marginal relevance.
- max_marginal_relevance_search_by_vector(embedding[, k, ...]) – Return docs selected using the maximal marginal relevance.
- search(query, search_type, **kwargs) – Return docs most similar to query using specified search type.
- similarity_search(query[, k, filter]) – Return meilisearch documents most similar to the query.
- similarity_search_by_vector(embedding[, k, ...]) – Return meilisearch documents most similar to embedding vector.
- similarity_search_by_vector_with_scores(embedding[, k, filter]) – Return meilisearch documents most similar to embedding vector, along with scores.
- similarity_search_with_relevance_scores(query[, k]) – Return docs and relevance scores in the range [0, 1].
- similarity_search_with_score(query[, k, filter]) – Return meilisearch documents most similar to the query, along with scores.
- __init__(embedding: Embeddings, client: Optional[Client] = None, url: Optional[str] = None, api_key: Optional[str] = None, index_name: str = 'langchain-demo', text_key: str = 'text', metadata_key: str = 'metadata')[source]¶
- Initialize with Meilisearch client. 
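- Example (a minimal sketch, not part of the original reference: it assumes a local Meilisearch instance at http://127.0.0.1:7700 and uses OpenAIEmbeddings; any Embeddings implementation works, and the constructor can build the client itself from url and api_key instead of taking a pre-built one):

    from langchain_community.vectorstores import Meilisearch
    from langchain_community.embeddings import OpenAIEmbeddings

    embeddings = OpenAIEmbeddings()
    vectorstore = Meilisearch(
        embedding=embeddings,
        url="http://127.0.0.1:7700",
        api_key="***",              # optional, depending on your instance
        index_name="langchain-demo",
        text_key="text",
    )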
 - async aadd_documents(documents: List[Document], **kwargs: Any) List[str]¶
- Run more documents through the embeddings and add to the vectorstore. - Parameters
- documents (List[Document]) – Documents to add to the vectorstore. 
- Returns
- List of IDs of the added texts. 
- Return type
- List[str] 
 
 - async aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) List[str]¶
- Run more texts through the embeddings and add to the vectorstore. 
 - add_documents(documents: List[Document], **kwargs: Any) List[str]¶
- Run more documents through the embeddings and add to the vectorstore. - Parameters
- documents (List[Document]) – Documents to add to the vectorstore. 
- Returns
- List of IDs of the added texts. 
- Return type
- List[str] 
 
 - add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) List[str][source]¶
- Run more texts through the embedding and add them to the vector store. - Parameters
- texts (Iterable[str]) – Iterable of strings/text to add to the vectorstore. 
- metadatas (Optional[List[dict]]) – Optional list of metadata. Defaults to None. 
- ids (Optional[List[str]]) – Optional list of IDs. Defaults to None. 
 
- Returns
- List of IDs of the texts added to the vectorstore. 
- Return type
- List[str] 
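- A minimal usage sketch (not part of the original reference; it assumes a vectorstore built as in the class example, and the texts and metadata values are illustrative):

    texts = [
        "Meilisearch is an open-source search engine.",
        "LangChain integrates with many vector stores.",
    ]
    metadatas = [{"source": "docs"}, {"source": "blog"}]
    ids = vectorstore.add_texts(texts=texts, metadatas=metadatas)
    print(ids)  # IDs of the texts added to the vectorstore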
 
 - async adelete(ids: Optional[List[str]] = None, **kwargs: Any) Optional[bool]¶
- Delete by vector ID or other criteria. - Parameters
- ids – List of ids to delete. 
- **kwargs – Other keyword arguments that subclasses might use. 
 
- Returns
- True if deletion is successful, False otherwise, None if not implemented. 
- Return type
- Optional[bool] 
 
 - async classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) VST¶
- Return VectorStore initialized from documents and embeddings. 
 - async classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) VST¶
- Return VectorStore initialized from texts and embeddings. 
 - async amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) List[Document]¶
- Return docs selected using the maximal marginal relevance. 
 - async amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) List[Document]¶
- Return docs selected using the maximal marginal relevance. 
 - as_retriever(**kwargs: Any) VectorStoreRetriever¶
- Return VectorStoreRetriever initialized from this VectorStore. - Parameters
- search_type (Optional[str]) – Defines the type of search that the Retriever should perform. Can be “similarity” (default), “mmr”, or “similarity_score_threshold”. 
- search_kwargs (Optional[Dict]) – Keyword arguments to pass to the search function. Can include things like:
    k: Amount of documents to return (Default: 4)
    score_threshold: Minimum relevance threshold for similarity_score_threshold
    fetch_k: Amount of documents to pass to MMR algorithm (Default: 20)
    lambda_mult: Diversity of results returned by MMR; 1 for minimum diversity and 0 for maximum. (Default: 0.5)
    filter: Filter by document metadata 
 
- Returns
- Retriever class for VectorStore. 
- Return type
- VectorStoreRetriever 

- Examples:

    # Retrieve more documents with higher diversity
    # Useful if your dataset has many similar documents
    docsearch.as_retriever(
        search_type="mmr",
        search_kwargs={'k': 6, 'lambda_mult': 0.25}
    )

    # Fetch more documents for the MMR algorithm to consider
    # But only return the top 5
    docsearch.as_retriever(
        search_type="mmr",
        search_kwargs={'k': 5, 'fetch_k': 50}
    )

    # Only retrieve documents that have a relevance score
    # above a certain threshold
    docsearch.as_retriever(
        search_type="similarity_score_threshold",
        search_kwargs={'score_threshold': 0.8}
    )

    # Only get the single most similar document from the dataset
    docsearch.as_retriever(search_kwargs={'k': 1})

    # Use a filter to only retrieve documents from a specific paper
    docsearch.as_retriever(
        search_kwargs={'filter': {'paper_title': 'GPT-4 Technical Report'}}
    )
 - async asearch(query: str, search_type: str, **kwargs: Any) List[Document]¶
- Return docs most similar to query using specified search type. 
 - async asimilarity_search(query: str, k: int = 4, **kwargs: Any) List[Document]¶
- Return docs most similar to query. 
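- A minimal async sketch (not part of the original reference; it assumes a vectorstore built as in the class example, and the query string is illustrative):

    import asyncio

    async def run_query(store):
        # async wrapper around the synchronous similarity search
        return await store.asimilarity_search("open-source search engine", k=4)

    docs = asyncio.run(run_query(vectorstore))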
 - async asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) List[Document]¶
- Return docs most similar to embedding vector. 
 - async asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) List[Tuple[Document, float]]¶
- Return docs and relevance scores in the range [0, 1], asynchronously. - 0 is dissimilar, 1 is most similar. - Parameters
- query – input text 
- k – Number of Documents to return. Defaults to 4. 
- **kwargs – kwargs to be passed to similarity search. Should include score_threshold: an optional floating point value between 0 and 1 used to filter the resulting set of retrieved docs. 
 
- Returns
- List of Tuples of (doc, similarity_score) 
 
 - async asimilarity_search_with_score(*args: Any, **kwargs: Any) List[Tuple[Document, float]]¶
- Run similarity search with distance asynchronously. 
 - delete(ids: Optional[List[str]] = None, **kwargs: Any) Optional[bool]¶
- Delete by vector ID or other criteria. - Parameters
- ids – List of ids to delete. 
- **kwargs – Other keyword arguments that subclasses might use. 
 
- Returns
- True if deletion is successful, False otherwise, None if not implemented. 
- Return type
- Optional[bool] 
 
 - classmethod from_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) VST¶
- Return VectorStore initialized from documents and embeddings. 
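- A minimal usage sketch (not part of the original reference; it assumes a local Meilisearch instance, OpenAIEmbeddings, and an illustrative Document, with url and api_key forwarded to from_texts via **kwargs):

    from langchain_core.documents import Document
    from langchain_community.embeddings import OpenAIEmbeddings
    from langchain_community.vectorstores import Meilisearch

    docs = [
        Document(
            page_content="Meilisearch is an open-source search engine.",
            metadata={"source": "docs"},
        )
    ]
    vectorstore = Meilisearch.from_documents(
        documents=docs,
        embedding=OpenAIEmbeddings(),
        url="http://127.0.0.1:7700",  # assumes a local Meilisearch instance
        api_key="***",                # optional, depending on your instance
    )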
 - classmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, client: Optional[Client] = None, url: Optional[str] = None, api_key: Optional[str] = None, index_name: str = 'langchain-demo', ids: Optional[List[str]] = None, text_key: Optional[str] = 'text', metadata_key: Optional[str] = 'metadata', **kwargs: Any) Meilisearch[source]¶
- Construct Meilisearch wrapper from raw documents. - This is a user-friendly interface that:
- Embeds documents. 
- Adds the documents to a provided Meilisearch index. 
 
- This is intended to be a quick way to get started.
- Example

    from langchain_community.vectorstores import Meilisearch
    from langchain_community.embeddings import OpenAIEmbeddings
    import meilisearch

    # api_key is optional; provide it if your Meilisearch instance requires it
    client = meilisearch.Client(url='http://127.0.0.1:7700', api_key='***')
    embeddings = OpenAIEmbeddings()
    docsearch = Meilisearch.from_texts(
        texts=["Meilisearch is a lightning-fast search engine."],  # raw texts to embed and index
        embedding=embeddings,
        client=client,
    )
 - max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) List[Document]¶
- Return docs selected using the maximal marginal relevance. - Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. - Parameters
- query – Text to look up documents similar to. 
- k – Number of Documents to return. Defaults to 4. 
- fetch_k – Number of Documents to fetch to pass to MMR algorithm. 
- lambda_mult – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5. 
 
- Returns
- List of Documents selected by maximal marginal relevance. 
 
 - max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) List[Document]¶
- Return docs selected using the maximal marginal relevance. - Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. - Parameters
- embedding – Embedding to look up documents similar to. 
- k – Number of Documents to return. Defaults to 4. 
- fetch_k – Number of Documents to fetch to pass to MMR algorithm. 
- lambda_mult – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5. 
 
- Returns
- List of Documents selected by maximal marginal relevance. 
 
 - search(query: str, search_type: str, **kwargs: Any) List[Document]¶
- Return docs most similar to query using specified search type. 
 - similarity_search(query: str, k: int = 4, filter: Optional[Dict[str, str]] = None, **kwargs: Any) List[Document][source]¶
- Return meilisearch documents most similar to the query. - Parameters
- query (str) – Query text for which to find similar documents. 
- k (int) – Number of documents to return. Defaults to 4. 
- filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None. 
 
- Returns
- List of Documents most similar to the query text. 
- Return type
- List[Document] 
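- A minimal usage sketch (not part of the original reference; it assumes a vectorstore built as in the class example, and the query and the "source" metadata field are illustrative; filtering requires that the field exist in your documents' metadata and be configured as filterable in Meilisearch):

    docs = vectorstore.similarity_search(
        query="open-source search engine",
        k=4,
        filter={"source": "docs"},  # optional metadata filter
    )
    for doc in docs:
        print(doc.page_content, doc.metadata)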
 
 - similarity_search_by_vector(embedding: List[float], k: int = 4, filter: Optional[Dict[str, str]] = None, **kwargs: Any) List[Document][source]¶
- Return meilisearch documents most similar to embedding vector. - Parameters
- embedding (List[float]) – Embedding to look up similar documents. 
- k (int) – Number of documents to return. Defaults to 4. 
- filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None. 
 
- Returns
- List of Documents most similar to the query vector. 
 
- Return type
- List[Document] 
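- A minimal usage sketch (not part of the original reference; it assumes the vectorstore and embeddings objects from the class example, and the query text is illustrative):

    # Embed the query yourself, then search by the raw vector
    query_vector = embeddings.embed_query("open-source search engine")
    docs = vectorstore.similarity_search_by_vector(embedding=query_vector, k=4)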
 
 - similarity_search_by_vector_with_scores(embedding: List[float], k: int = 4, filter: Optional[Dict[str, Any]] = None, **kwargs: Any) List[Tuple[Document, float]][source]¶
- Return meilisearch documents most similar to embedding vector. - Parameters
- embedding (List[float]) – Embedding to look up similar documents. 
- k (int) – Number of documents to return. Defaults to 4. 
- filter (Optional[Dict[str, Any]]) – Filter by metadata. Defaults to None. 
 
- Returns
- List of Documents most similar to the query vector, with a score for each. 
 
- Return type
- List[Tuple[Document, float]] 
 
 - similarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) List[Tuple[Document, float]]¶
- Return docs and relevance scores in the range [0, 1]. - 0 is dissimilar, 1 is most similar. - Parameters
- query – input text 
- k – Number of Documents to return. Defaults to 4. 
- **kwargs – kwargs to be passed to similarity search. Should include score_threshold: an optional floating point value between 0 and 1 used to filter the resulting set of retrieved docs. 
 
- Returns
- List of Tuples of (doc, similarity_score) 
 
 - similarity_search_with_score(query: str, k: int = 4, filter: Optional[Dict[str, str]] = None, **kwargs: Any) List[Tuple[Document, float]][source]¶
- Return meilisearch documents most similar to the query, along with scores. - Parameters
- query (str) – Query text for which to find similar documents. 
- k (int) – Number of documents to return. Defaults to 4. 
- filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None. 
 
- Returns
- List of Documents most similar to the query text and score for each. 
- Return type
- List[Tuple[Document, float]]
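- A minimal usage sketch (not part of the original reference; it assumes a vectorstore built as in the class example, and the query string is illustrative):

    results = vectorstore.similarity_search_with_score("open-source search engine", k=4)
    for doc, score in results:
        # each result is a (Document, score) pair
        print(score, doc.page_content)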