langchain_community.document_loaders.news.NewsURLLoader¶

class langchain_community.document_loaders.news.NewsURLLoader(urls: List[str], text_mode: bool = True, nlp: bool = False, continue_on_failure: bool = True, show_progress_bar: bool = False, **newspaper_kwargs: Any)[source]¶

Load news articles from URLs using the newspaper3k library.

Parameters
  • urls (List[str]) – URLs to load. Each is loaded into its own document.

  • text_mode (bool) – If True, extract text from URL and use that for page content. Otherwise, extract raw HTML.

  • nlp (bool) – If True, perform NLP on the extracted content, such as generating a summary and extracting keywords.

  • continue_on_failure (bool) – If True, continue loading documents even if loading fails for a particular URL.

  • show_progress_bar (bool) – If True, use tqdm to show a loading progress bar. Requires tqdm to be installed (pip install tqdm).

  • **newspaper_kwargs (Any) – Any additional named arguments to pass to newspaper.Article().

Example

from langchain_community.document_loaders import NewsURLLoader

loader = NewsURLLoader(
    urls=["<url-1>", "<url-2>"],
)
docs = loader.load()
Newspaper reference:

https://newspaper.readthedocs.io/en/latest/
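
A slightly fuller sketch, with nlp=True so that newspaper also computes a summary and keywords; the URL is a placeholder, and printing doc.metadata simply shows whatever fields were extracted.

from langchain_community.document_loaders import NewsURLLoader

# nlp=True asks newspaper to also produce a summary and keywords,
# which are stored alongside the other article metadata.
loader = NewsURLLoader(urls=["<url-1>"], nlp=True)
docs = loader.load()

for doc in docs:
    # Extracted article text (raw HTML if text_mode=False)
    print(doc.page_content[:200])
    # Extracted metadata, including the NLP fields when nlp=True
    print(doc.metadata)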

Initialize with URLs to load.

Methods

__init__(urls[, text_mode, nlp, ...])

Initialize with URLs to load.

alazy_load()

A lazy loader for Documents.

lazy_load()

A lazy loader for Documents.

load()

Load data into Document objects.

load_and_split([text_splitter])

Load Documents and split into chunks.

__init__(urls: List[str], text_mode: bool = True, nlp: bool = False, continue_on_failure: bool = True, show_progress_bar: bool = False, **newspaper_kwargs: Any) None[source]¶

Initialize with URLs to load.

Parameters
  • urls (List[str]) – URLs to load; each is loaded into its own document.

  • text_mode (bool) – If True, extract text for page content; otherwise keep raw HTML.

  • nlp (bool) – If True, also generate a summary and keywords.

  • continue_on_failure (bool) – If True, continue loading even if a URL fails.

  • show_progress_bar (bool) – If True, show a tqdm progress bar.

  • newspaper_kwargs (Any) – Additional keyword arguments passed to newspaper.Article().

Return type

None
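
As a sketch of how **newspaper_kwargs flows through: any extra keyword arguments are forwarded to newspaper.Article(); the language argument below is a standard newspaper option and is used here only as an illustration.

from langchain_community.document_loaders import NewsURLLoader

# Extra keyword arguments (here: language) are passed straight through
# to newspaper.Article() when each URL is loaded.
loader = NewsURLLoader(
    urls=["<url-1>"],
    text_mode=True,
    continue_on_failure=True,
    language="en",
)
docs = loader.load()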

async alazy_load() AsyncIterator[Document]¶

A lazy loader for Documents.

Return type

AsyncIterator[Document]
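
A minimal sketch of consuming the async iterator; documents are yielded one at a time rather than collected into a list.

import asyncio

from langchain_community.document_loaders import NewsURLLoader

async def main() -> None:
    loader = NewsURLLoader(urls=["<url-1>", "<url-2>"])
    # Each Document is yielded as soon as its URL has been loaded.
    async for doc in loader.alazy_load():
        print(doc.metadata)

asyncio.run(main())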

lazy_load() Iterator[Document][source]¶

A lazy loader for Documents.

Return type

Iterator[Document]
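
The synchronous counterpart, useful when articles should be processed one at a time instead of materializing the full list up front.

from langchain_community.document_loaders import NewsURLLoader

loader = NewsURLLoader(urls=["<url-1>", "<url-2>"])

# Iterate lazily instead of holding every Document in memory at once.
for doc in loader.lazy_load():
    print(len(doc.page_content))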

load() List[Document][source]¶

Load data into Document objects.

Return type

List[Document]

load_and_split(text_splitter: Optional[TextSplitter] = None) List[Document]¶

Load Documents and split into chunks. Chunks are returned as Documents.

Do not override this method; it should be considered deprecated.

Parameters

text_splitter (Optional[TextSplitter]) – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter.

Returns

List of Documents.

Return type

List[Document]
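
A short sketch of chunking the loaded articles; the splitter and chunk sizes below are illustrative choices rather than anything required by this loader.

from langchain_community.document_loaders import NewsURLLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter

loader = NewsURLLoader(urls=["<url-1>"])
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)

# Each returned Document is one chunk of an article's text.
chunks = loader.load_and_split(text_splitter=splitter)
print(len(chunks))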

Examples using NewsURLLoader¶