langchain_community.document_loaders.gitbook.GitbookLoader
- class langchain_community.document_loaders.gitbook.GitbookLoader(web_page: str, load_all_paths: bool = False, base_url: Optional[str] = None, content_selector: str = 'main', continue_on_failure: bool = False)[source]
Load GitBook data.
Load from either a single page, or load all (relative) paths in the navbar.
Initialize with web page and whether to load all paths.
- Parameters
web_page (str) – The web page to load or the starting point from where relative paths are discovered.
load_all_paths (bool) – If set to True, all relative paths in the navbar are loaded instead of only web_page.
base_url (Optional[str]) – If load_all_paths is True, the relative paths are appended to this base url. Defaults to web_page.
content_selector (str) – The CSS selector for the content to load. Defaults to “main”.
continue_on_failure (bool) – Whether to continue loading the sitemap if an error occurs loading a URL, emitting a warning instead of raising an exception. Setting this to True makes the loader more robust, but may also result in missing data. Default: False.
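Example
A minimal usage sketch of the constructor parameters above (the GitBook URL is illustrative; substitute your own space):

```python
from langchain_community.document_loaders import GitbookLoader

# Load only the single page given as web_page.
loader = GitbookLoader("https://docs.gitbook.com")
docs = loader.load()

# Load every relative path the site exposes, resolving them against
# base_url and warning (rather than raising) on pages that fail to load.
all_pages_loader = GitbookLoader(
    "https://docs.gitbook.com",
    load_all_paths=True,
    base_url="https://docs.gitbook.com",
    continue_on_failure=True,
)
all_docs = all_pages_loader.load()
```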
Attributes
web_path
Methods
__init__(web_page[, load_all_paths, ...]) – Initialize with web page and whether to load all paths.
alazy_load() – A lazy loader for Documents.
aload() – Load text from the urls in web_path async into Documents.
fetch_all(urls) – Fetch all urls concurrently with rate limiting.
lazy_load() – Fetch text from one single GitBook page.
load() – Load data into Document objects.
load_and_split([text_splitter]) – Load Documents and split into chunks.
scrape([parser]) – Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls[, parser]) – Fetch all urls, then return soups for all results.
- __init__(web_page: str, load_all_paths: bool = False, base_url: Optional[str] = None, content_selector: str = 'main', continue_on_failure: bool = False)[source]
Initialize with web page and whether to load all paths.
- Parameters
web_page (str) – The web page to load or the starting point from where relative paths are discovered.
load_all_paths (bool) – If set to True, all relative paths in the navbar are loaded instead of only web_page.
base_url (Optional[str]) – If load_all_paths is True, the relative paths are appended to this base url. Defaults to web_page.
content_selector (str) – The CSS selector for the content to load. Defaults to “main”.
continue_on_failure (bool) – Whether to continue loading the sitemap if an error occurs loading a URL, emitting a warning instead of raising an exception. Setting this to True makes the loader more robust, but may also result in missing data. Default: False.
- async alazy_load() → AsyncIterator[Document]
A lazy loader for Documents.
- Return type
AsyncIterator[Document]
- aload() → List[Document]
Load text from the urls in web_path async into Documents.
- Return type
List[Document]
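For asynchronous pipelines, a brief sketch of the async entry points (URL illustrative; alazy_load is used per its async-iterator signature above, and aload as the eager, list-returning counterpart):

```python
import asyncio

from langchain_community.document_loaders import GitbookLoader


async def collect() -> list:
    loader = GitbookLoader("https://docs.gitbook.com", load_all_paths=True)
    # alazy_load yields Documents one at a time as they become available.
    return [doc async for doc in loader.alazy_load()]


docs = asyncio.run(collect())

# aload() returns the full list directly; call it outside a running event loop.
docs = GitbookLoader("https://docs.gitbook.com").aload()
```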
- async fetch_all(urls: List[str]) → Any
Fetch all urls concurrently with rate limiting.
- Parameters
urls (List[str]) –
- Return type
Any
- lazy_load() → Iterator[Document][source]
Fetch text from one single GitBook page.
- Return type
Iterator[Document]
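lazy_load is useful when pages should be processed one at a time rather than collected into a list first; a short sketch (URL illustrative; the metadata key shown is an assumption to verify against doc.metadata):

```python
from langchain_community.document_loaders import GitbookLoader

loader = GitbookLoader("https://docs.gitbook.com", load_all_paths=True)

# Iterate documents as they are produced instead of building a full list.
for doc in loader.lazy_load():
    # "source" is a common metadata key for web-based loaders; inspect
    # doc.metadata to confirm which keys are actually present.
    print(doc.metadata.get("source"), len(doc.page_content))
```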
- load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]
Load Documents and split into chunks. Chunks are returned as Documents.
Do not override this method. It should be considered to be deprecated!
- Parameters
text_splitter (Optional[TextSplitter]) – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter.
- Returns
List of Documents.
- Return type
List[Document]
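A sketch of splitting loaded pages into chunks (chunk sizes are illustrative, and the import assumes the langchain-text-splitters package is installed; since the docstring above discourages relying on load_and_split, splitting separately after load() gives the same result):

```python
from langchain_community.document_loaders import GitbookLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter

loader = GitbookLoader("https://docs.gitbook.com", load_all_paths=True)
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)

# Effectively the same as splitter.split_documents(loader.load()).
chunks = loader.load_and_split(text_splitter=splitter)
```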
- scrape(parser: Optional[str] = None) → Any
Scrape data from webpage and return it in BeautifulSoup format.
- Parameters
parser (Optional[str]) –
- Return type
Any
- scrape_all(urls: List[str], parser: Optional[str] = None) → List[Any]
Fetch all urls, then return soups for all results.
- Parameters
urls (List[str]) –
parser (Optional[str]) –
- Return type
List[Any]
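scrape and scrape_all expose the underlying BeautifulSoup objects, which can help when choosing a content_selector; a brief sketch (URL illustrative; the selector mirrors the default 'main'):

```python
from langchain_community.document_loaders import GitbookLoader

loader = GitbookLoader("https://docs.gitbook.com")

# scrape() returns a BeautifulSoup object for the configured web page.
soup = loader.scrape()
main = soup.find("main")  # matches the default content_selector
if main is not None:
    print(main.get_text(separator="\n").strip()[:200])
```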