langchain_community.document_loaders.hugging_face_dataset.HuggingFaceDatasetLoader
- class langchain_community.document_loaders.hugging_face_dataset.HuggingFaceDatasetLoader(path: str, page_content_column: str = 'text', name: Optional[str] = None, data_dir: Optional[str] = None, data_files: Optional[Union[str, Sequence[str], Mapping[str, Union[str, Sequence[str]]]]] = None, cache_dir: Optional[str] = None, keep_in_memory: Optional[bool] = None, save_infos: bool = False, use_auth_token: Optional[Union[bool, str]] = None, num_proc: Optional[int] = None)
Load from Hugging Face Hub datasets.
Initialize the HuggingFaceDatasetLoader.
- Parameters
path – Path or name of the dataset.
page_content_column – Page content column name. Default is “text”.
name – Name of the dataset configuration.
data_dir – Data directory of the dataset configuration.
data_files – Path(s) to source data file(s).
cache_dir – Directory to read/write data.
keep_in_memory – Whether to copy the dataset in memory.
save_infos – Save the dataset information (checksums/size/splits/…). Default is False.
use_auth_token – Bearer token for remote files on the Hugging Face Hub.
num_proc – Number of processes to use when downloading and generating the dataset locally.
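A minimal usage sketch (the "imdb" dataset name is illustrative; any dataset hosted on the Hugging Face Hub works the same way):

```python
from langchain_community.document_loaders import HuggingFaceDatasetLoader

# Use the "text" column of each record as Document page_content;
# the remaining columns end up in Document metadata.
loader = HuggingFaceDatasetLoader("imdb", page_content_column="text")

docs = loader.load()
print(docs[0].page_content[:100])
print(docs[0].metadata)
```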
Methods
- __init__(path[, page_content_column, name, ...]) – Initialize the HuggingFaceDatasetLoader.
- lazy_load() – Load documents lazily.
- load() – Load documents.
- load_and_split([text_splitter]) – Load Documents and split into chunks.
- parse_obj(page_content)
- __init__(path: str, page_content_column: str = 'text', name: Optional[str] = None, data_dir: Optional[str] = None, data_files: Optional[Union[str, Sequence[str], Mapping[str, Union[str, Sequence[str]]]]] = None, cache_dir: Optional[str] = None, keep_in_memory: Optional[bool] = None, save_infos: bool = False, use_auth_token: Optional[Union[bool, str]] = None, num_proc: Optional[int] = None)
Initialize the HuggingFaceDatasetLoader.
- Parameters
path – Path or name of the dataset.
page_content_column – Page content column name. Default is “text”.
name – Name of the dataset configuration.
data_dir – Data directory of the dataset configuration.
data_files – Path(s) to source data file(s).
cache_dir – Directory to read/write data.
keep_in_memory – Whether to copy the dataset in memory.
save_infos – Save the dataset information (checksums/size/splits/…). Default is False.
use_auth_token – Bearer token for remote files on the Hugging Face Hub.
num_proc – Number of processes to use when downloading and generating the dataset locally.
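As a sketch of the configuration-related parameters (the "wikitext" dataset and its "wikitext-2-raw-v1" configuration are illustrative choices):

```python
from langchain_community.document_loaders import HuggingFaceDatasetLoader

# "name" selects a configuration of the dataset; here, the raw
# WikiText-2 variant. The "text" column becomes page_content.
loader = HuggingFaceDatasetLoader(
    path="wikitext",
    name="wikitext-2-raw-v1",
    page_content_column="text",
)

# lazy_load() yields Documents one at a time instead of
# materializing the whole dataset as a list.
first_doc = next(iter(loader.lazy_load()))
```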
- load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]
Load Documents and split into chunks. Chunks are returned as Documents.
- Parameters
text_splitter – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter.
- Returns
List of Documents.
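A short sketch of splitting loaded documents (the chunk sizes are arbitrary; in recent releases RecursiveCharacterTextSplitter is imported from langchain_text_splitters):

```python
from langchain_community.document_loaders import HuggingFaceDatasetLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter

loader = HuggingFaceDatasetLoader("imdb", page_content_column="text")

# Split each loaded Document into ~500-character chunks with
# 50 characters of overlap; every chunk comes back as a Document.
splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
chunks = loader.load_and_split(text_splitter=splitter)
```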