langchain_community.document_loaders.xorbits.XorbitsLoader¶
- class langchain_community.document_loaders.xorbits.XorbitsLoader(data_frame: Any, page_content_column: str = 'text')[source]¶
Load Xorbits DataFrame.
Initialize with dataframe object.
- Requirements:
Must have xorbits installed. You can install with pip install xorbits.
- Parameters
data_frame (Any) – Xorbits DataFrame object.
page_content_column (str) – Name of the column containing the page content. Defaults to “text”.
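The loader maps each DataFrame row to a Document: the value in page_content_column becomes the Document's page_content, and the remaining columns become its metadata. The sketch below illustrates that mapping with plain Python dicts as a hypothetical stand-in for an actual Xorbits DataFrame and the real Document class; the names `rows_to_documents` and this minimal `Document` are illustrative, not part of the library API.

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class Document:
    # Minimal stand-in for langchain_core.documents.Document.
    page_content: str
    metadata: Dict[str, Any] = field(default_factory=dict)

def rows_to_documents(
    rows: List[Dict[str, Any]], page_content_column: str = "text"
) -> List[Document]:
    """Map each row to a Document: the page_content_column supplies the
    text, and every other column becomes metadata."""
    docs = []
    for row in rows:
        content = row[page_content_column]
        metadata = {k: v for k, v in row.items() if k != page_content_column}
        docs.append(Document(page_content=str(content), metadata=metadata))
    return docs

rows = [
    {"text": "Xorbits scales pandas workloads.", "source": "intro.md"},
    {"text": "Loaders turn rows into Documents.", "source": "guide.md"},
]
docs = rows_to_documents(rows)
```

In actual use you would pass a `xorbits.pandas` DataFrame to `XorbitsLoader(df, page_content_column="text")` and call `load()`; the sketch only shows the row-to-Document shape of the result.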
Methods
__init__(data_frame[, page_content_column])
Initialize with dataframe object.
alazy_load()
A lazy loader for Documents.
lazy_load()
Lazy load records from dataframe.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
- __init__(data_frame: Any, page_content_column: str = 'text')[source]¶
Initialize with dataframe object.
- Requirements:
Must have xorbits installed. You can install with pip install xorbits.
- Parameters
data_frame (Any) – Xorbits DataFrame object.
page_content_column (str) – Name of the column containing the page content. Defaults to “text”.
- async alazy_load() AsyncIterator[Document] ¶
A lazy loader for Documents.
- Return type
AsyncIterator[Document]
- load_and_split(text_splitter: Optional[TextSplitter] = None) List[Document] ¶
Load Documents and split into chunks. Chunks are returned as Documents.
Do not override this method; it should be considered deprecated.
- Parameters
text_splitter (Optional[TextSplitter]) – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter.
- Returns
List of Documents.
- Return type
List[Document]
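The load-then-split behavior can be sketched in plain Python. This is an illustrative stand-in only: `split_text` below is a hypothetical fixed-size character chunker standing in for RecursiveCharacterTextSplitter, and `load_and_split` here mirrors the documented contract (load every record, split each one, return the flattened chunk list) rather than reproducing the library implementation.

```python
from typing import List

def split_text(text: str, chunk_size: int = 20) -> List[str]:
    # Hypothetical stand-in for a TextSplitter: fixed-size character chunks.
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

def load_and_split(texts: List[str], chunk_size: int = 20) -> List[str]:
    # Mirrors the documented contract: load every record, split each into
    # chunks, and return the flattened list of chunks.
    chunks: List[str] = []
    for text in texts:
        chunks.extend(split_text(text, chunk_size))
    return chunks

# A 45-character record splits into chunks of 20, 20, and 5 characters;
# a 10-character record yields a single chunk.
chunks = load_and_split(["a" * 45, "b" * 10])
```

With the real loader, each chunk would come back wrapped as a Document rather than a bare string, carrying the source row's metadata.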