langchain_text_splitters.spacy.SpacyTextSplitter
- class langchain_text_splitters.spacy.SpacyTextSplitter(separator: str = '\n\n', pipeline: str = 'en_core_web_sm', max_length: int = 1000000, **kwargs: Any)[source]
Splitting text using the spaCy package.
By default, spaCy's en_core_web_sm model is used, and its default max_length is 1000000 (the maximum number of characters the model will process, which can be increased for large files). For faster but potentially less accurate splitting, you can use pipeline='sentencizer'.
Initialize the spacy text splitter.
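A minimal usage sketch (assuming the en_core_web_sm model has been installed, e.g. with python -m spacy download en_core_web_sm):

from langchain_text_splitters import SpacyTextSplitter

# Uses the default en_core_web_sm pipeline; chunk_size and chunk_overlap
# are inherited TextSplitter options passed through **kwargs.
splitter = SpacyTextSplitter(chunk_size=1000, chunk_overlap=0)

text = "First sentence. Second sentence. A third, somewhat longer sentence."
for chunk in splitter.split_text(text):
    print(chunk)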
Methods
__init__([separator, pipeline, max_length])
    Initialize the spacy text splitter.
atransform_documents(documents, **kwargs)
    Asynchronously transform a list of documents.
create_documents(texts[, metadatas])
    Create documents from a list of texts.
from_huggingface_tokenizer(tokenizer, **kwargs)
    Text splitter that uses HuggingFace tokenizer to count length.
from_tiktoken_encoder([encoding_name, ...])
    Text splitter that uses tiktoken encoder to count length.
split_documents(documents)
    Split documents.
split_text(text)
    Split incoming text and return chunks.
transform_documents(documents, **kwargs)
    Transform sequence of documents by splitting them.
- Parameters
separator (str) – Separator placed between merged sentence chunks. Defaults to '\n\n'.
pipeline (str) – Name of the spaCy pipeline to load. Defaults to 'en_core_web_sm'.
max_length (int) – Maximum number of characters the spaCy pipeline will accept. Defaults to 1000000.
kwargs (Any) – Additional keyword arguments passed through to the TextSplitter base class.
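For large documents, the docstring above points to two options: raise max_length or switch to the lighter sentencizer pipeline. A sketch of both:

from langchain_text_splitters import SpacyTextSplitter

# Rule-based sentence splitting; faster, no statistical model to load.
fast_splitter = SpacyTextSplitter(pipeline="sentencizer")

# Full model with max_length raised for inputs over 1,000,000 characters.
big_doc_splitter = SpacyTextSplitter(max_length=2_000_000)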
- __init__(separator: str = '\n\n', pipeline: str = 'en_core_web_sm', max_length: int = 1000000, **kwargs: Any) → None[source]
Initialize the spacy text splitter.
- Parameters
separator (str) – Separator placed between merged sentence chunks. Defaults to '\n\n'.
pipeline (str) – Name of the spaCy pipeline to load. Defaults to 'en_core_web_sm'.
max_length (int) – Maximum number of characters the spaCy pipeline will accept. Defaults to 1000000.
kwargs (Any) – Additional keyword arguments passed through to the TextSplitter base class.
- Return type
None
- async atransform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document]
Asynchronously transform a list of documents.
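A sketch of asynchronous splitting (assuming langchain_core is installed for the Document type):

import asyncio

from langchain_core.documents import Document
from langchain_text_splitters import SpacyTextSplitter

async def main() -> None:
    splitter = SpacyTextSplitter()
    docs = [Document(page_content="First sentence. Second sentence.")]
    # Each input document is split into chunk-sized documents.
    split_docs = await splitter.atransform_documents(docs)
    print(len(split_docs))

asyncio.run(main())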
- create_documents(texts: List[str], metadatas: Optional[List[dict]] = None) → List[Document]
Create documents from a list of texts.
- Parameters
texts (List[str]) – Texts to split into documents.
metadatas (Optional[List[dict]]) – Optional metadata dicts, one per input text, attached to the documents created from that text.
- Return type
List[Document]
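A sketch of create_documents with per-text metadata, where each chunk inherits the metadata of the text it was split from:

from langchain_text_splitters import SpacyTextSplitter

splitter = SpacyTextSplitter()
texts = ["First sentence. Second sentence.", "Another text to split."]
metadatas = [{"source": "a.txt"}, {"source": "b.txt"}]

# Each resulting Document keeps the metadata of the text it came from.
docs = splitter.create_documents(texts, metadatas=metadatas)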
- classmethod from_huggingface_tokenizer(tokenizer: Any, **kwargs: Any) → TextSplitter
Text splitter that uses HuggingFace tokenizer to count length.
- Parameters
tokenizer (Any) – HuggingFace tokenizer used to count the length of chunks.
kwargs (Any) – Additional keyword arguments passed to the splitter constructor.
- Return type
TextSplitter
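A sketch using a HuggingFace tokenizer to measure chunk size in tokens (assumes the transformers package is installed):

from transformers import GPT2TokenizerFast

from langchain_text_splitters import SpacyTextSplitter

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

# chunk_size is now counted in tokens as produced by the tokenizer,
# not in characters.
splitter = SpacyTextSplitter.from_huggingface_tokenizer(
    tokenizer, chunk_size=100, chunk_overlap=0
)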
- classmethod from_tiktoken_encoder(encoding_name: str = 'gpt2', model_name: Optional[str] = None, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', **kwargs: Any) → TS
Text splitter that uses tiktoken encoder to count length.
- Parameters
encoding_name (str) – Name of the tiktoken encoding to use. Defaults to 'gpt2'.
model_name (Optional[str]) – Model name used to look up the encoding; takes precedence over encoding_name when given.
allowed_special (Union[Literal['all'], AbstractSet[str]]) – Special tokens permitted in the text.
disallowed_special (Union[Literal['all'], Collection[str]]) – Special tokens that raise an error when encountered. Defaults to 'all'.
kwargs (Any) – Additional keyword arguments passed to the splitter constructor.
- Return type
TS
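A sketch of token-based sizing with tiktoken (assumes the tiktoken package is installed; cl100k_base is one commonly available encoding):

from langchain_text_splitters import SpacyTextSplitter

# chunk_size is measured in tiktoken tokens; the encoding can also be
# selected via model_name instead of encoding_name.
splitter = SpacyTextSplitter.from_tiktoken_encoder(
    encoding_name="cl100k_base", chunk_size=100, chunk_overlap=0
)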