langchain_community.document_loaders.news.NewsURLLoader¶
- class langchain_community.document_loaders.news.NewsURLLoader(urls: List[str], text_mode: bool = True, nlp: bool = False, continue_on_failure: bool = True, show_progress_bar: bool = False, **newspaper_kwargs: Any)[source]¶
Load news articles from URLs using the newspaper library.
- Parameters
urls – URLs to load. Each is loaded into its own document.
text_mode – If True, extract text from URL and use that for page content. Otherwise, extract raw HTML.
nlp – If True, perform NLP on the extracted contents, like providing a summary and extracting keywords.
continue_on_failure – If True, continue loading documents even if loading fails for a particular URL.
show_progress_bar – If True, use tqdm to show a loading progress bar. Requires tqdm to be installed (pip install tqdm).
**newspaper_kwargs – Any additional named arguments to pass to newspaper.Article().
Example
from langchain_community.document_loaders import NewsURLLoader

loader = NewsURLLoader(
    urls=["<url-1>", "<url-2>"],
)
docs = loader.load()
- Newspaper reference:
Methods

__init__(urls[, text_mode, nlp, ...]) – Initialize with URLs.
lazy_load() – A lazy loader for Documents.
load() – Load data into Document objects.
load_and_split([text_splitter]) – Load Documents and split into chunks.
- __init__(urls: List[str], text_mode: bool = True, nlp: bool = False, continue_on_failure: bool = True, show_progress_bar: bool = False, **newspaper_kwargs: Any) → None[source]¶
Initialize with URLs.
- load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
- Parameters
text_splitter – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter.
- Returns
List of Documents.