langchain_community.document_loaders.browserless.BrowserlessLoader

class langchain_community.document_loaders.browserless.BrowserlessLoader(api_token: str, urls: Union[str, List[str]], text_content: bool = True)[source]

Load webpages with the Browserless /content endpoint.

Initialize with the API token and the URLs to scrape.

Attributes

api_token

Browserless API token.

urls

List of URLs to scrape.

Methods

__init__(api_token, urls[, text_content])

Initialize with the API token and the URLs to scrape.

alazy_load()

A lazy loader for Documents.

lazy_load()

Lazy load Documents from URLs.

load()

Load data into Document objects.

load_and_split([text_splitter])

Load Documents and split into chunks.

Parameters
  • api_token (str) – Browserless API token.

  • urls (Union[str, List[str]]) – A single URL or a list of URLs to scrape.

  • text_content (bool) – Whether to return each page's text content (True, the default) rather than its raw HTML.

__init__(api_token: str, urls: Union[str, List[str]], text_content: bool = True)[source]

Initialize with the API token and the URLs to scrape.

Parameters
  • api_token (str) – Browserless API token.

  • urls (Union[str, List[str]]) – A single URL or a list of URLs to scrape.

  • text_content (bool) – Whether to return each page's text content (True, the default) rather than its raw HTML.

async alazy_load() → AsyncIterator[Document]

A lazy loader for Documents.

Return type

AsyncIterator[Document]

lazy_load() → Iterator[Document][source]

Lazy load Documents from URLs.

Return type

Iterator[Document]
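
The iteration pattern can be sketched in plain Python. This is a simplified stand-in, not the library's actual implementation: `Document` here is a minimal substitute for `langchain_core.documents.Document`, and the hypothetical `fetch` callable stands in for the authenticated request the real loader makes to the Browserless /content endpoint.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, Iterator, List, Union


@dataclass
class Document:
    """Minimal stand-in for langchain_core.documents.Document."""
    page_content: str
    metadata: Dict[str, str] = field(default_factory=dict)


def lazy_load_sketch(
    urls: Union[str, List[str]],
    fetch: Callable[[str], str],
) -> Iterator[Document]:
    # A single URL string is treated as a one-element list, mirroring
    # the loader's ``urls: Union[str, List[str]]`` parameter.
    url_list = [urls] if isinstance(urls, str) else urls
    for url in url_list:
        # In the real loader this step is a request to the Browserless
        # /content endpoint authenticated with the API token; each page
        # becomes one Document tagged with its source URL.
        yield Document(page_content=fetch(url), metadata={"source": url})
```

Because it is a generator, nothing is fetched until the caller iterates, which is what makes `lazy_load` cheap for long URL lists.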

load() → List[Document]

Load data into Document objects.

Return type

List[Document]

load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]

Load Documents and split into chunks. Chunks are returned as Documents.

Do not override this method; it should be considered deprecated.

Parameters

text_splitter (Optional[TextSplitter]) – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter.

Returns

List of Documents.

Return type

List[Document]
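
The chunking idea behind the default splitter can be illustrated with a toy character splitter. This is an assumption-laden sketch: the real `RecursiveCharacterTextSplitter` is smarter (it prefers to break on separators such as paragraph and line boundaries before falling back to raw characters), but the windows-with-overlap behavior is the same in spirit.

```python
from typing import List


def split_text_sketch(
    text: str, chunk_size: int = 100, chunk_overlap: int = 20
) -> List[str]:
    # Fixed-size character windows, each overlapping the previous one
    # by ``chunk_overlap`` characters so context is not lost at chunk
    # boundaries.
    chunks: List[str] = []
    step = chunk_size - chunk_overlap
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += step
    return chunks
```

In `load_and_split`, each chunk produced this way is wrapped back into a `Document`, so downstream code sees many small Documents instead of one large one.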

Examples using BrowserlessLoader