langchain_community.document_loaders.pyspark_dataframe.PySparkDataFrameLoader

class langchain_community.document_loaders.pyspark_dataframe.PySparkDataFrameLoader(spark_session: Optional[SparkSession] = None, df: Optional[Any] = None, page_content_column: str = 'text', fraction_of_memory: float = 0.1)[source]

Load PySpark DataFrames.

Initialize with a Spark DataFrame object.

Parameters
  • spark_session – The SparkSession object.

  • df – The Spark DataFrame object.

  • page_content_column – The name of the column containing the page content. Defaults to “text”.

  • fraction_of_memory – The fraction of memory to use. Defaults to 0.1.
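
A minimal construction sketch, assuming a local SparkSession and a small in-memory DataFrame; the column names and sample rows are illustrative:

    from pyspark.sql import SparkSession

    from langchain_community.document_loaders.pyspark_dataframe import (
        PySparkDataFrameLoader,
    )

    spark = SparkSession.builder.getOrCreate()

    # Illustrative data; any DataFrame with a text-bearing column works.
    df = spark.createDataFrame(
        [("LangChain loads documents.", "a.txt"), ("Spark scales the job.", "b.txt")],
        ["text", "source"],
    )

    loader = PySparkDataFrameLoader(spark, df, page_content_column="text")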

Methods

__init__([spark_session, df, ...])

Initialize with a Spark DataFrame object.

get_num_rows()

Gets the number of "feasible" rows for the DataFrame.

lazy_load()

A lazy loader for document content.

load()

Load from the dataframe.

load_and_split([text_splitter])

Load Documents and split into chunks.

__init__(spark_session: Optional[SparkSession] = None, df: Optional[Any] = None, page_content_column: str = 'text', fraction_of_memory: float = 0.1)[source]

Initialize with a Spark DataFrame object.

Parameters
  • spark_session – The SparkSession object.

  • df – The Spark DataFrame object.

  • page_content_column – The name of the column containing the page content. Defaults to “text”.

  • fraction_of_memory – The fraction of memory to use. Defaults to 0.1.

get_num_rows() → Tuple[int, int][source]

Get the number of “feasible” rows for the DataFrame, i.e. how many rows fit within the configured fraction_of_memory, along with the DataFrame’s total row count.
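
A small usage sketch with the loader from the construction example above; reading the pair as (rows that fit within fraction_of_memory, total rows) is an assumption based on the method's name and parameters:

    # Assumed meaning: (rows that fit in the memory budget, total row count).
    feasible_rows, total_rows = loader.get_num_rows()
    print(feasible_rows, total_rows)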

lazy_load() → Iterator[Document][source]

A lazy loader for document content.
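
A sketch of lazy iteration with the loader from above; documents are yielded one at a time rather than materialized as a list (the metadata comment reflects the usual DataFrame-loader convention and is an assumption here):

    for doc in loader.lazy_load():
        # page_content comes from page_content_column; the remaining columns
        # are assumed to be carried in doc.metadata.
        print(doc.page_content, doc.metadata)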

load() → List[Document][source]

Load from the dataframe.
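
The eager counterpart to lazy_load(); a sketch with the loader from above:

    docs = loader.load()
    print(len(docs))
    print(docs[0].page_content)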

load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]

Load Documents and split into chunks. Chunks are returned as Documents.

Parameters

text_splitter – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter.

Returns

List of Documents.
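
A sketch of splitting while loading; the langchain_text_splitters import path is an assumption (older releases expose RecursiveCharacterTextSplitter under langchain.text_splitter), and the chunk sizes are arbitrary:

    from langchain_text_splitters import RecursiveCharacterTextSplitter

    splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
    chunks = loader.load_and_split(text_splitter=splitter)
    print(len(chunks))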

Examples using PySparkDataFrameLoader
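
A hedged end-to-end sketch: read a CSV into Spark, wrap it in the loader, and materialize Documents. The file path and the assumption that the CSV has a "text" column are placeholders for illustration:

    from pyspark.sql import SparkSession

    from langchain_community.document_loaders.pyspark_dataframe import (
        PySparkDataFrameLoader,
    )

    spark = SparkSession.builder.getOrCreate()

    # Placeholder path; the CSV is assumed to contain a "text" column.
    df = spark.read.csv("data/articles.csv", header=True)

    loader = PySparkDataFrameLoader(spark, df, page_content_column="text")
    docs = loader.load()
    print(docs[0])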