langchain_community.document_loaders.hn.HNLoader¶
- class langchain_community.document_loaders.hn.HNLoader(web_path: Union[str, Sequence[str]] = '', header_template: Optional[dict] = None, verify_ssl: bool = True, proxies: Optional[dict] = None, continue_on_failure: bool = False, autoset_encoding: bool = True, encoding: Optional[str] = None, web_paths: Sequence[str] = (), requests_per_second: int = 2, default_parser: str = 'html.parser', requests_kwargs: Optional[Dict[str, Any]] = None, raise_for_status: bool = False, bs_get_text_kwargs: Optional[Dict[str, Any]] = None, bs_kwargs: Optional[Dict[str, Any]] = None, session: Any = None)[source]¶
Load Hacker News data.
It loads data either from the main page's results or from a post's comments page.
Initialize loader.
- Parameters
web_paths (Sequence[str]) – Web paths to load from.
requests_per_second (int) – Max number of concurrent requests to make.
default_parser (str) – Default parser to use for BeautifulSoup.
requests_kwargs (Optional[Dict[str, Any]]) – kwargs for requests
raise_for_status (bool) – Raise an exception if the HTTP status code denotes an error.
bs_get_text_kwargs (Optional[Dict[str, Any]]) – kwargs for beautifulsoup4 get_text
bs_kwargs (Optional[Dict[str, Any]]) – kwargs for beautifulsoup4 web page parsing
web_path (Union[str, Sequence[str]]) –
header_template (Optional[dict]) –
verify_ssl (bool) –
proxies (Optional[dict]) –
continue_on_failure (bool) –
autoset_encoding (bool) –
encoding (Optional[str]) –
session (Any) –
Attributes
web_path
Methods
__init__([web_path, header_template, ...]) – Initialize loader.
alazy_load() – A lazy loader for Documents.
aload() – Asynchronously load text from the urls in web_path into Documents.
fetch_all(urls) – Fetch all urls concurrently with rate limiting.
lazy_load() – Lazy load text from the url(s) in web_path.
load() – Get important HN webpage information.
load_and_split([text_splitter]) – Load Documents and split into chunks.
load_comments(soup_info) – Load comments from an HN post.
load_results(soup) – Load items from an HN page.
scrape([parser]) – Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls[, parser]) – Fetch all urls, then return soups for all results.
- __init__(web_path: Union[str, Sequence[str]] = '', header_template: Optional[dict] = None, verify_ssl: bool = True, proxies: Optional[dict] = None, continue_on_failure: bool = False, autoset_encoding: bool = True, encoding: Optional[str] = None, web_paths: Sequence[str] = (), requests_per_second: int = 2, default_parser: str = 'html.parser', requests_kwargs: Optional[Dict[str, Any]] = None, raise_for_status: bool = False, bs_get_text_kwargs: Optional[Dict[str, Any]] = None, bs_kwargs: Optional[Dict[str, Any]] = None, session: Any = None) → None¶
Initialize loader.
- Parameters
web_paths (Sequence[str]) – Web paths to load from.
requests_per_second (int) – Max number of concurrent requests to make.
default_parser (str) – Default parser to use for BeautifulSoup.
requests_kwargs (Optional[Dict[str, Any]]) – kwargs for requests
raise_for_status (bool) – Raise an exception if the HTTP status code denotes an error.
bs_get_text_kwargs (Optional[Dict[str, Any]]) – kwargs for beautifulsoup4 get_text
bs_kwargs (Optional[Dict[str, Any]]) – kwargs for beautifulsoup4 web page parsing
web_path (Union[str, Sequence[str]]) –
header_template (Optional[dict]) –
verify_ssl (bool) –
proxies (Optional[dict]) –
continue_on_failure (bool) –
autoset_encoding (bool) –
encoding (Optional[str]) –
session (Any) –
- Return type
None
- async alazy_load() → AsyncIterator[Document]¶
A lazy loader for Documents.
- Return type
AsyncIterator[Document]
- aload() → List[Document]¶
Asynchronously load text from the urls in web_path into Documents.
- Return type
List[Document]
- async fetch_all(urls: List[str]) → Any¶
Fetch all urls concurrently with rate limiting.
- Parameters
urls (List[str]) –
- Return type
Any
- lazy_load() → Iterator[Document]¶
Lazy load text from the url(s) in web_path.
- Return type
Iterator[Document]
- load() → List[Document][source]¶
Get important HN webpage information.
- HN webpage components are:
title
content
source url
time of post
author of the post
number of comments
rank of the post
- Return type
List[Document]
- load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Do not override this method. It should be considered deprecated!
- Parameters
text_splitter (Optional[TextSplitter]) – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter.
- Returns
List of Documents.
- Return type
List[Document]
- load_comments(soup_info: Any) → List[Document][source]¶
Load comments from an HN post.
- Parameters
soup_info (Any) –
- Return type
List[Document]
- load_results(soup: Any) → List[Document][source]¶
Load items from an HN page.
- Parameters
soup (Any) –
- Return type
List[Document]
- scrape(parser: Optional[str] = None) → Any¶
Scrape data from webpage and return it in BeautifulSoup format.
- Parameters
parser (Optional[str]) –
- Return type
Any
- scrape_all(urls: List[str], parser: Optional[str] = None) → List[Any]¶
Fetch all urls, then return soups for all results.
- Parameters
urls (List[str]) –
parser (Optional[str]) –
- Return type
List[Any]