langchain_community.document_loaders.docugami.DocugamiLoader¶

class langchain_community.document_loaders.docugami.DocugamiLoader[source]¶

Bases: BaseLoader, BaseModel

Load from Docugami.

To use, you should have the dgml-utils Python package installed.

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError if the input data cannot be parsed to form a valid model.
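
A minimal sketch of configuring the loader against the Docugami API; the docset ID and access token below are placeholders, not real values:

    from langchain_community.document_loaders.docugami import DocugamiLoader

    loader = DocugamiLoader(
        docset_id="YOUR_DOCSET_ID",        # placeholder
        access_token="YOUR_ACCESS_TOKEN",  # placeholder
    )
    docs = loader.load()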

param access_token: Optional[str] = None¶

The Docugami API access token to use.

param api: str = 'https://api.docugami.com/v1preview1'¶

The Docugami API endpoint to use.

param docset_id: Optional[str] = None¶

The Docugami API docset ID to use.

param document_ids: Optional[Sequence[str]] = None¶

The Docugami API document IDs to use.

param file_paths: Optional[Sequence[Union[Path, str]]] = None¶

The local file paths to use.

param include_project_metadata_in_doc_metadata: bool = True¶

Set to True if you want to include the project metadata in the doc metadata.

param include_xml_tags: bool = False¶

Set to True to include XML tags in chunk output text.

param max_metadata_length: int = 512¶

Max length of metadata text returned.

param max_text_length: int = 4096¶

Max length of chunk text returned.

param min_text_length: int = 32¶

Threshold under which chunks are appended to the next chunk, to avoid over-chunking.

param parent_hierarchy_levels: int = 0¶

Set to a value greater than 0 to get parent chunks from that many levels up the chunk hierarchy; see the sketch after the parameter list.

param parent_id_key: str = 'doc_id'¶

Metadata key for parent doc ID.

param sub_chunk_tables: bool = False¶

Set to True to return sub-chunks within tables.

param whitespace_normalize_text: bool = True¶

Set to False if you want the full whitespace formatting of the original XML doc, including indentation.
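
For instance, a sketch of configuring parent chunk retrieval, assuming (per the parent_id_key description above) that each chunk's parent chunk ID is exposed in its metadata:

    # Placeholders -- substitute real Docugami credentials.
    loader = DocugamiLoader(
        docset_id="YOUR_DOCSET_ID",
        access_token="YOUR_ACCESS_TOKEN",
        parent_hierarchy_levels=1,  # include one level of parent chunks
    )
    chunks = loader.load()

    # Look up each chunk's parent via the parent_id_key metadata key
    # ("doc_id" by default).
    for chunk in chunks:
        parent_id = chunk.metadata.get(loader.parent_id_key)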

classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) Model¶

Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values.

copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) Model¶

Duplicate a model, optionally choose which fields to include, exclude and change.

Parameters
  • include – fields to include in new model

  • exclude – fields to exclude from new model, as with values this takes precedence over include

  • update – values to change/add in the new model. Note: the data is not validated before creating the new model; you should trust this data

  • deep – set to True to make a deep copy of the model

Returns

new model instance
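
For example, a standard pydantic copy that overrides a single field (the new value here is illustrative):

    # Duplicate the loader with a smaller chunk text limit.
    # Note: update values are not validated.
    shorter_loader = loader.copy(update={"max_text_length": 1024})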

dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) DictStrAny¶

Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.

classmethod from_orm(obj: Any) Model¶

json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) unicode¶

Generate a JSON representation of the model; include and exclude arguments are as per dict().

encoder is an optional function to supply as default to json.dumps(); other arguments are as per json.dumps().
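
For example, serializing a configured loader while excluding the secret token (standard pydantic v1 usage; the field choice is illustrative):

    config_dict = loader.dict(exclude={"access_token"})
    config_json = loader.json(exclude={"access_token"})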

lazy_load() Iterator[Document]¶

A lazy loader for Documents.
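
A sketch of streaming Documents one at a time rather than materializing a full list (loader configured as above):

    for doc in loader.lazy_load():
        # Each Document carries chunk text plus Docugami metadata.
        print(len(doc.page_content), doc.metadata)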

load() List[Document][source]¶

Load documents.

load_and_split(text_splitter: Optional[TextSplitter] = None) List[Document]¶

Load Documents and split into chunks. Chunks are returned as Documents.

Parameters

text_splitter – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter.

Returns

List of Documents.
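
For example, a sketch that re-splits the loaded chunks with an explicit splitter (the chunk sizes are illustrative):

    from langchain_text_splitters import RecursiveCharacterTextSplitter

    splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
    split_docs = loader.load_and_split(text_splitter=splitter)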

classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) Model¶

classmethod parse_obj(obj: Any) Model¶

classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) Model¶

classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') DictStrAny¶

classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) unicode¶

classmethod update_forward_refs(**localns: Any) None¶

Try to update ForwardRefs on fields based on this Model, globalns and localns.

classmethod validate(value: Any) Model¶

Examples using DocugamiLoader¶
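
A minimal end-to-end sketch, assuming pre-downloaded Docugami XML (DGML) files on disk supplied via the file_paths parameter; the path below is a placeholder:

    from langchain_community.document_loaders.docugami import DocugamiLoader

    loader = DocugamiLoader(
        docset_id=None,
        file_paths=["/path/to/your/file.dgml"],  # placeholder path
    )
    for doc in loader.lazy_load():
        print(doc.page_content[:80])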