langchain_community.document_loaders.fauna.FaunaLoader¶

class langchain_community.document_loaders.fauna.FaunaLoader(query: str, page_content_field: str, secret: str, metadata_fields: Optional[Sequence[str]] = None)[source]¶

Load from FaunaDB.

Parameters
  • query (str) – The FQL query string to execute.

  • page_content_field (str) – The field that contains the content of each page.

  • secret (str) – The secret key for authenticating to FaunaDB.

  • metadata_fields (Optional[Sequence[str]]) – Optional list of field names to include in metadata.

query¶

The FQL query string to execute.

Type

str

page_content_field¶

The field that contains the content of each page.

Type

str

secret¶

The secret key for authenticating to FaunaDB.

Type

str

metadata_fields¶

Optional list of field names to include in metadata.

Type

Optional[Sequence[str]]
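
A minimal construction sketch (not part of the reference itself); the FQL query, field names, and secret below are illustrative placeholders to be replaced with values from your own Fauna database:

    from langchain_community.document_loaders.fauna import FaunaLoader

    loader = FaunaLoader(
        query="Item.all()",                # assumed FQL query over a collection named "Item"
        page_content_field="text",         # assumed field holding the page content
        secret="<your-fauna-secret>",      # Fauna secret key
        metadata_fields=["title", "url"],  # optional; assumed metadata field names
    )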

Methods

__init__(query, page_content_field, secret[, metadata_fields])

alazy_load()

A lazy loader for Documents.

lazy_load()

A lazy loader for Documents.

load()

Load data into Document objects.

load_and_split([text_splitter])

Load Documents and split into chunks.

__init__(query: str, page_content_field: str, secret: str, metadata_fields: Optional[Sequence[str]] = None)[source]¶
Parameters
  • query (str) – The FQL query string to execute.

  • page_content_field (str) – The field that contains the content of each page.

  • secret (str) – The secret key for authenticating to FaunaDB.

  • metadata_fields (Optional[Sequence[str]]) – Optional list of field names to include in metadata.

async alazy_load() → AsyncIterator[Document]¶

A lazy loader for Documents.

Return type

AsyncIterator[Document]
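
A hedged sketch of asynchronous loading; the constructor arguments are the same illustrative placeholders as in the construction example above:

    import asyncio

    from langchain_community.document_loaders.fauna import FaunaLoader

    async def main():
        loader = FaunaLoader("Item.all()", "text", "<your-fauna-secret>")
        # Documents are yielded asynchronously, one at a time.
        async for doc in loader.alazy_load():
            print(doc.page_content[:80])

    asyncio.run(main())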

lazy_load() → Iterator[Document][source]¶

A lazy loader for Documents.

Return type

Iterator[Document]
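
A sketch of lazy (streaming) iteration, again with placeholder arguments; each Document is yielded as it is read rather than collected into a list first:

    from langchain_community.document_loaders.fauna import FaunaLoader

    loader = FaunaLoader("Item.all()", "text", "<your-fauna-secret>")
    for doc in loader.lazy_load():
        # Process one Document at a time to keep memory usage flat.
        print(doc.metadata)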

load() → List[Document]¶

Load data into Document objects.

Return type

List[Document]
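
A sketch of eager loading with placeholder arguments; load() collects every result into a list of Document objects, whose page_content comes from page_content_field and whose metadata comes from metadata_fields:

    from langchain_community.document_loaders.fauna import FaunaLoader

    loader = FaunaLoader(
        "Item.all()", "text", "<your-fauna-secret>",
        metadata_fields=["title"],  # assumed field name
    )
    docs = loader.load()
    print(len(docs), docs[0].page_content, docs[0].metadata)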

load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶

Load Documents and split into chunks. Chunks are returned as Documents.

Do not override this method; it should be considered deprecated.

Parameters

text_splitter (Optional[TextSplitter]) – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter.

Returns

List of Documents.

Return type

List[Document]
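
A sketch of loading and splitting in one call, assuming the langchain-text-splitters package is installed (RecursiveCharacterTextSplitter is the documented default when no splitter is passed); all loader arguments are illustrative placeholders:

    from langchain_community.document_loaders.fauna import FaunaLoader
    from langchain_text_splitters import RecursiveCharacterTextSplitter

    loader = FaunaLoader("Item.all()", "text", "<your-fauna-secret>")
    splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
    chunks = loader.load_and_split(text_splitter=splitter)  # chunks is a List[Document]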

Examples using FaunaLoader¶