langchain_community 0.0.8¶
langchain_community.adapters¶
Classes¶
Chat completion. |
|
Create a new model by parsing and validating input data from keyword arguments. |
|
Create a new model by parsing and validating input data from keyword arguments. |
|
Create a new model by parsing and validating input data from keyword arguments. |
|
Create a new model by parsing and validating input data from keyword arguments. |
|
Completion. |
|
Allows a BaseModel to return its fields by string variable indexing |
Functions¶
|
Async version of enumerate function. |
Convert a dictionary to a LangChain message. |
|
Convert a LangChain message to a dictionary. |
|
Convert messages to a list of lists of dictionaries for fine-tuning. |
|
|
Convert dictionaries representing OpenAI messages to LangChain format. |
langchain_community.agent_toolkits¶
Agent toolkits contain integrations with various resources and services.
LangChain has a large ecosystem of integrations with various external resources like local and remote file systems, APIs and databases.
These integrations allow developers to create versatile applications that combine the power of LLMs with the ability to access, interact with and manipulate external resources.
When developing an application, developers should inspect the capabilities and permissions of the tools that underlie the given agent toolkit, and determine whether permissions of the given toolkit are appropriate for the application.
See [Security](https://python.langchain.com/docs/security) for more information.
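For example, a toolkit's permissions can be narrowed by selecting only the tools an application needs. A minimal sketch using FileManagementToolkit (the root_dir path is hypothetical):
from langchain_community.agent_toolkits import FileManagementToolkit

# Restrict the agent to a sandbox directory and read-only operations.
toolkit = FileManagementToolkit(
    root_dir="/tmp/agent-scratch",  # hypothetical sandbox path
    selected_tools=["read_file", "list_directory"],
)
tools = toolkit.get_tools()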
Classes¶
Toolkit for interacting with AINetwork Blockchain. |
|
Toolkit for interacting with Amadeus which offers APIs for travel. |
|
|
Toolkit for Azure Cognitive Services. |
Base Toolkit representing a collection of related tools. |
|
Clickup Toolkit. |
|
|
Toolkit for interacting with local files. |
Schema for operations that require a branch name as input. |
|
Schema for operations that require a comment as input. |
|
Schema for operations that require a file path and content as input. |
|
Schema for operations that require a PR title and body as input. |
|
Schema for operations that require a username as input. |
|
Schema for operations that require a file path as input. |
|
Schema for operations that require a directory path as input. |
|
Schema for operations that require an issue number as input. |
|
Schema for operations that require a PR number as input. |
|
GitHub Toolkit. |
|
Schema for operations that do not require any input. |
|
Schema for operations that require a file path as input. |
|
Schema for operations that require a search query as input. |
|
Schema for operations that require a search query as input. |
|
Schema for operations that require a file path and content as input. |
|
GitLab Toolkit. |
|
Toolkit for interacting with Gmail. |
|
Jira Toolkit. |
|
Toolkit for interacting with a JSON spec. |
|
Toolkit for interacting with the Browser Agent. |
|
Nasa Toolkit. |
|
Natural Language API Tool. |
|
Natural Language API Toolkit. |
|
Toolkit for interacting with Office 365. |
|
|
A tool that sends a DELETE request and parses the response. |
Requests GET tool with LLM-instructed extraction of truncated responses. |
|
Requests PATCH tool with LLM-instructed extraction of truncated responses. |
|
Requests POST tool with LLM-instructed extraction of truncated responses. |
|
Requests PUT tool with LLM-instructed extraction of truncated responses. |
|
A reduced OpenAPI spec. |
|
Toolkit for interacting with an OpenAPI API. |
|
Toolkit for making REST requests. |
|
Toolkit for PlayWright browser tools. |
|
Toolkit for interacting with Power BI dataset. |
|
Toolkit for interacting with Slack. |
|
Toolkit for interacting with Spark SQL. |
|
Toolkit for interacting with SQL databases. |
|
Steam Toolkit. |
|
Zapier Toolkit. |
Functions¶
Construct a json agent from an LLM and tools. |
|
Construct an OpenAPI agent from an LLM and tools. |
|
Instantiate OpenAI API planner and controller for a given spec. |
|
Simplify/distill/minify an OpenAPI spec to produce a smaller target for retrieval. |
|
Construct a Power BI agent from an LLM and tools. |
|
Construct a Power BI agent from a Chat LLM and tools. |
|
Construct a Spark SQL agent from an LLM and tools. |
|
Construct an SQL agent from an LLM and tools. |
langchain_community.cache¶
Warning: Beta Feature!
Cache provides an optional caching layer for LLMs.
Cache is useful for two reasons:
It can save you money by reducing the number of API calls you make to the LLM provider if you’re often requesting the same completion multiple times.
It can speed up your application by reducing the number of API calls you make to the LLM provider.
Cache directly competes with Memory. See documentation for Pros and Cons.
Class hierarchy:
BaseCache --> <name>Cache # Examples: InMemoryCache, RedisCache, GPTCache
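A minimal sketch of enabling the in-memory cache (assumes the set_llm_cache helper exported by langchain.globals):
from langchain.globals import set_llm_cache
from langchain_community.cache import InMemoryCache

# Repeated identical prompts are now answered from memory
# instead of triggering a new API call.
set_llm_cache(InMemoryCache())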
Classes¶
|
Cache that uses Astra DB as a backend. |
|
Cache that uses Astra DB as a vector-store backend for semantic (i.e. |
|
Cache that uses Cassandra / Astra DB as a backend. |
|
Cache that uses Cassandra as a vector-store backend for semantic (i.e. |
|
SQLite table for full LLM Cache (all generations). |
|
SQLite table for full LLM Cache (all generations). |
|
Cache that uses GPTCache as a backend. |
Cache that stores things in memory. |
|
|
Cache that uses Momento as a backend. |
|
Cache that uses Redis as a backend. |
|
Cache that uses Redis as a vector-store backend. |
|
Cache that uses SQLAlchemy as a backend. |
|
Cache that uses SQLAlchemy as a backend. |
|
Cache that uses SQLite as a backend. |
|
Cache that uses Upstash Redis as a backend. |
Functions¶
langchain_community.callbacks¶
Callback handlers allow listening to events in LangChain.
Class hierarchy:
BaseCallbackHandler --> <name>CallbackHandler # Example: AimCallbackHandler
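A minimal sketch of a custom handler in this hierarchy (a hypothetical handler that prints streamed tokens; assumes BaseCallbackHandler from langchain_core.callbacks):
from langchain_core.callbacks import BaseCallbackHandler

class PrintingCallbackHandler(BaseCallbackHandler):
    # Hypothetical handler: print each token as the LLM streams it.
    def on_llm_new_token(self, token: str, **kwargs) -> None:
        print(token, end="")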
Classes¶
Callback Handler that logs to Aim. |
|
This class handles the metadata and associated function states for callbacks. |
|
Callback Handler that logs into Argilla. |
|
Callback Handler that logs to Arize. |
|
Callback Handler that logs to Arthur platform. |
|
Callback Handler that logs to ClearML. |
|
Callback Handler that logs to Comet. |
|
|
Callback Handler that logs into deepeval. |
Callback Handler that records transcripts to the Context service. |
|
Callback handler that is used within a Flyte task. |
|
Asynchronous callback for manually validating values. |
|
Callback for manually validating values. |
|
Exception to raise when a person manually reviews and rejects a value. |
|
Callback Handler that logs to Infino. |
|
|
Label Studio callback handler. |
Label Studio mode enumerator. |
|
|
Callback Handler for LLMonitor. |
Context manager for LLMonitor user context. |
|
Callback Handler that logs metrics and artifacts to mlflow server. |
|
|
Callback Handler that logs metrics and artifacts to mlflow server. |
Callback Handler that tracks OpenAI info. |
|
|
Callback handler for promptlayer. |
Callback Handler that logs prompt artifacts and metrics to SageMaker Experiments. |
|
The child record as a NamedTuple. |
|
The enumerator of the child type. |
|
A Streamlit expander that can be renamed and dynamically expanded/collapsed. |
|
|
A thought in the LLM's thought stream. |
|
Generates markdown labels for LLMThought containers. |
|
Enumerator of the LLMThought state. |
|
A callback handler that writes to a Streamlit app. |
|
The tool record as a NamedTuple. |
|
Comet Tracer. |
Handles the conversion of a LangChain Runs into a WBTraceTree. |
|
Arguments for the WandbTracer. |
|
|
Callback Handler that logs to Weights and Biases. |
Callback handler for Trubrics. |
|
This class handles the metadata and associated function states for callbacks. |
|
Callback Handler that logs to Weights and Biases. |
|
Callback Handler for logging to WhyLabs. |
Functions¶
Import the aim python package and raise an error if it is not installed. |
|
Import the clearml python package and raise an error if it is not installed. |
|
Import comet_ml and raise an error if it is not installed. |
|
Import the getcontext package. |
|
Analyze text using textstat and spacy. |
|
Import flytekit and flytekitplugins-deck-standard. |
|
Calculate num tokens for OpenAI with tiktoken package. |
|
Import the infino client. |
|
Import tiktoken for counting tokens for OpenAI models. |
|
|
Get default Label Studio configs for the given mode. |
Builds an LLMonitor UserContextManager |
|
Get the OpenAI callback handler in a context manager. |
|
Get the WandbTracer in a context manager. |
|
Analyze text using textstat and spacy. |
|
|
Construct an html element from a prompt and a generation. |
Import the mlflow python package and raise an error if it is not installed. |
|
Get the cost in USD for a given model and number of tokens. |
|
Standardize the model name to a format that can be used in the OpenAI API. |
|
|
Save dict to local file path. |
Import comet_llm api and raise an error if it is not installed. |
|
|
Flattens a nested dictionary into a flat dictionary. |
Hash a string using sha1. |
|
Import the pandas python package and raise an error if it is not installed. |
|
Import the spacy python package and raise an error if it is not installed. |
|
Import the textstat python package and raise an error if it is not installed. |
|
|
Load json file to a string. |
Analyze text using textstat and spacy. |
|
|
Construct an html element from a prompt and a generation. |
Import the wandb python package and raise an error if it is not installed. |
|
Load json file to a dictionary. |
|
Import the langkit python package and raise an error if it is not installed. |
langchain_community.chat_loaders¶
Chat Loaders load chat messages from common communications platforms.
Load chat messages from various communications platforms such as Facebook Messenger, Telegram, and WhatsApp. The loaded chat messages can be used for fine-tuning models.
Class hierarchy:
BaseChatLoader --> <name>ChatLoader # Examples: WhatsAppChatLoader, IMessageChatLoader
Main helpers:
ChatSession
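A minimal usage sketch (the export path and sender name are hypothetical):
from langchain_community.chat_loaders.utils import map_ai_messages, merge_chat_runs
from langchain_community.chat_loaders.whatsapp import WhatsAppChatLoader

loader = WhatsAppChatLoader(path="./whatsapp_chat.txt")  # hypothetical export
sessions = merge_chat_runs(loader.lazy_load())  # merge consecutive messages
# Treat one participant's messages as the AI side for fine-tuning.
sessions = list(map_ai_messages(sessions, sender="Alice"))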
Classes¶
Base class for chat loaders. |
|
|
Load Facebook Messenger chat data from a folder. |
|
Load Facebook Messenger chat data from a single file. |
|
Load data from GMail. |
Load chat sessions from the iMessage chat.db SQLite file. |
|
Load chat sessions from a LangSmith dataset with the "chat" data type. |
|
Load chat sessions from a list of LangSmith "llm" runs. |
|
Load Slack conversations from a dump zip file. |
|
Load telegram conversations to LangChain chat messages. |
|
Load WhatsApp conversations from a dump zip file or directory. |
Functions¶
|
|
Convert messages from the specified 'sender' to AI messages. |
|
Convert messages from the specified 'sender' to AI messages. |
|
|
Merge chat runs together. |
Merge chat runs together in a chat session. |
langchain_community.chat_message_histories¶
Classes¶
|
Chat message history that stores history in Astra DB. |
|
Chat message history that stores history in Cassandra. |
|
Chat message history backed by Azure CosmosDB. |
|
Chat message history that stores history in AWS DynamoDB. |
|
Chat message history that stores history in Elasticsearch. |
Chat message history that stores history in a local file. |
|
|
Chat message history backed by Google Firestore. |
In memory implementation of chat message history. |
|
|
Chat message history cache that uses Momento as a backend. |
|
Chat message history that stores history in MongoDB. |
Chat message history stored in a Neo4j database. |
|
|
Chat message history stored in a Postgres database. |
Chat message history stored in a Redis database. |
|
|
Uses Rockset to store chat messages. |
|
Chat message history stored in a SingleStoreDB database. |
The class responsible for converting BaseMessage to your SQLAlchemy model. |
|
The default message converter for SQLChatMessageHistory. |
|
Chat message history stored in an SQL database. |
|
|
Chat message history that stores messages in Streamlit session state. |
|
Chat message history stored in an Upstash Redis database. |
Chat message history stored in a Xata database. |
|
Chat message history that uses Zep as a backend. |
Functions¶
Create a message model for a given table name. |
langchain_community.chat_models¶
Chat Models are a variation on language models.
While Chat Models use language models under the hood, the interface they expose is a bit different. Rather than expose a “text in, text out” API, they expose an interface where “chat messages” are the inputs and outputs.
Class hierarchy:
BaseLanguageModel --> BaseChatModel --> <name> # Examples: ChatOpenAI, ChatGooglePalm
Main helpers:
AIMessage, BaseMessage, HumanMessage
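A minimal sketch of the message-in, message-out interface (assumes an OpenAI API key in the environment):
from langchain_community.chat_models import ChatOpenAI
from langchain_core.messages import HumanMessage

chat = ChatOpenAI(model="gpt-3.5-turbo")
# The input is a list of messages; the result is an AIMessage.
result = chat.invoke([HumanMessage(content="Hello!")])
print(result.content)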
Classes¶
Anthropic chat large language models. |
|
Anyscale Chat large language models. |
|
Azure OpenAI Chat Completion API. |
|
AzureML Chat models API. |
|
Content formatter for LLaMA. |
|
Baichuan chat models API by Baichuan Intelligent Technology. |
|
Baidu Qianfan chat models. |
|
A chat model that uses the Bedrock API. |
|
Adapter class to prepare the inputs from Langchain to the prompt format that the Chat model expects. |
|
Cohere chat large language models. |
|
Databricks chat models API. |
|
ERNIE-Bot large language model. |
|
EverlyAI Chat large language models. |
|
Fake ChatModel for testing purposes. |
|
Fake ChatModel for testing purposes. |
|
Fireworks Chat models. |
|
GigaChat large language models API. |
|
Google PaLM Chat models API. |
|
Error with the Google PaLM API. |
|
GPTRouter by Writesonic Inc. |
|
Error with the GPTRouter APIs |
|
Create a new model by parsing and validating input data from keyword arguments. |
|
Wrapper for using Hugging Face LLMs as ChatModels. |
|
ChatModel which returns user input as the response. |
|
Tencent Hunyuan chat models API by Tencent. |
|
Javelin AI Gateway chat models API. |
|
Parameters for the Javelin AI Gateway LLM. |
|
Jina AI Chat models API. |
|
ChatKonko Chat large language models API. |
|
A chat model that uses the LiteLLM API. |
|
Error with the LiteLLM I/O library |
|
Wrapper around Minimax large language models. |
|
MLflow chat models API. |
|
MLflow AI Gateway chat models API. |
|
Parameters for the MLflow AI Gateway LLM. |
|
Ollama locally runs large language models. |
|
OpenAI Chat large language models API. |
|
EAS LLM Service chat model API. |
|
PromptLayer and OpenAI Chat large language models API. |
|
Alibaba Tongyi Qwen chat models API. |
|
Vertex AI Chat large language models API. |
|
Volcengine MaaS hosts a plethora of models. |
|
Chat with LLMs via llama-api-server |
|
Wrapper around YandexGPT large language models. |
|
ZHIPU AI large language chat models API. |
|
Create a new model by parsing and validating input data from keyword arguments. |
|
Create a new model by parsing and validating input data from keyword arguments. |
Functions¶
|
Format a list of messages into a full prompt for the Anthropic model |
|
Convert a message to a dictionary that can be passed to the API. |
Get the request for the Cohere chat API. |
|
|
Get the role of the message. |
Use tenacity to retry the async completion call. |
|
Use tenacity to retry the completion call for streaming. |
|
Use tenacity to retry the completion call. |
|
Define conditional decorator. |
|
Convert a dict response to a message. |
|
Use tenacity to retry the async completion call. |
|
|
Use tenacity to retry the completion call. |
Use tenacity to retry the async completion call. |
|
Use tenacity to retry the completion call. |
|
Return the body for the model router input. |
|
Use tenacity to retry the async completion call. |
|
Use tenacity to retry the async completion call. |
|
Convert a list of messages to a prompt for llama. |
|
Use tenacity to retry the async completion call. |
|
Convert a message to a dict. |
|
Convert a dict to a message. |
|
Use tenacity to retry the async completion call. |
|
Use tenacity to retry the completion call. |
langchain_community.docstore¶
Docstores are classes to store and load Documents.
The Docstore is a simplified version of the Document Loader.
Class hierarchy:
Docstore --> <name> # Examples: InMemoryDocstore, Wikipedia
Main helpers:
Document, AddableMixin
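A minimal sketch using the in-memory docstore:
from langchain_community.docstore import InMemoryDocstore
from langchain_core.documents import Document

store = InMemoryDocstore({"doc-1": Document(page_content="hello world")})
print(store.search("doc-1"))                        # returns the Document
store.add({"doc-2": Document(page_content="bye")})  # AddableMixin support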
Classes¶
|
Langchain Docstore via arbitrary lookup function. |
Mixin class that supports adding texts. |
|
Interface to access to place that stores documents. |
|
|
Simple in memory docstore in the form of a dict. |
Wrapper around wikipedia API. |
langchain_community.document_loaders¶
Document Loaders are classes to load Documents.
Document Loaders are usually used to load a lot of Documents in a single run.
Class hierarchy:
BaseLoader --> <name>Loader # Examples: TextLoader, UnstructuredFileLoader
Main helpers:
Document, <name>TextSplitter
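A minimal sketch (the file path is hypothetical):
from langchain_community.document_loaders import TextLoader

loader = TextLoader("./state_of_the_union.txt")  # hypothetical path
docs = loader.load()  # a list of Document objects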
Classes¶
|
Load acreom vault from a directory. |
Load with an Airbyte source connector implemented using the CDK. |
|
Load from Gong using an Airbyte source connector. |
|
Load from Hubspot using an Airbyte source connector. |
|
Load from Salesforce using an Airbyte source connector. |
|
Load from Shopify using an Airbyte source connector. |
|
Load from Stripe using an Airbyte source connector. |
|
Load from Typeform using an Airbyte source connector. |
|
Load from Zendesk Support using an Airbyte source connector. |
|
Load local Airbyte json files. |
|
Load the Airtable tables. |
|
Load datasets from Apify web scraping, crawling, and data extraction platform. |
|
Load records from an ArcGIS FeatureLayer. |
|
|
Load a query result from Arxiv. |
|
Loader for AssemblyAI audio transcripts. |
Transcript format to use for the document loader. |
|
Load DataStax Astra DB documents. |
|
Load HTML asynchronously. |
|
Load AZLyrics webpages. |
|
Load from Azure AI Data. |
|
|
Load from Azure Blob Storage container. |
|
Load from Azure Blob Storage files. |
|
Load from Baidu BOS directory. |
|
Load from Baidu Cloud BOS file. |
Abstract interface for blob parsers. |
|
Interface for Document Loader. |
|
Base class for all loaders that use the O365 package. |
|
|
Load a bibtex file. |
Load from the Google Cloud Platform BigQuery. |
|
Load BiliBili video transcripts. |
|
Load a Blackboard course. |
|
|
Load blobs in the local file system. |
Blob represents raw data by either reference or value. |
|
Abstract interface for blob loaders implementation. |
|
|
Load YouTube URLs as audio file(s). |
Load elements from a blockchain smart contract. |
|
Enumerator of the supported blockchains. |
|
Load with Brave Search engine. |
|
Load webpages with Browserless /content endpoint. |
|
|
Load conversations from exported ChatGPT data. |
Scrape HTML pages from URLs using a headless instance of Chromium. |
|
|
Load College Confidential webpages. |
Load and parse Documents concurrently. |
|
Load Confluence pages. |
|
Enumerator of the content formats of Confluence page. |
|
|
Load CoNLL-U files. |
Load documents from Couchbase. |
|
|
Load a CSV file into a list of Documents. |
Load CSV files using Unstructured. |
|
Load Cube semantic layer metadata. |
|
Load Datadog logs. |
|
Initialize with dataframe object. |
|
Load Pandas DataFrame. |
|
Load Diffbot json file. |
|
Load from a directory. |
|
Load Discord chat logs. |
|
|
Loads a PDF with Azure Document Intelligence |
Load from Docugami. |
|
Load from Docusaurus Documentation. |
|
Load files from Dropbox. |
|
Load from DuckDB. |
|
Loads Outlook Message files using extract_msg. |
|
Load email files using Unstructured. |
|
Load EPub files using Unstructured. |
|
Load transactions from Ethereum mainnet. |
|
Load from EverNote. |
|
Load Microsoft Excel files using Unstructured. |
|
Load Facebook Chat messages directory dump. |
|
|
Load from FaunaDB. |
Load Figma file. |
|
Load from GCS directory. |
|
Load from GCS file. |
|
Generic Document Loader. |
|
Load geopandas Dataframe. |
|
|
Load Git repository files. |
|
Load GitBook data. |
Load GitHub repository Issues. |
|
Load issues of a GitHub repository. |
|
|
Loader for Google Cloud Speech-to-Text audio transcripts. |
Load Google Docs from Google Drive. |
|
Load from Gutenberg.org. |
|
File encoding as the NamedTuple. |
|
|
Load Hacker News data. |
Load HTML files using Unstructured. |
|
|
Load HTML files and parse them with Beautiful Soup. |
|
Load from Hugging Face Hub datasets. |
|
Load iFixit repair guides, device wikis and answers. |
Load PNG and JPG files using Unstructured. |
|
Load image captions. |
|
Load IMSDb webpages. |
|
|
Load from IUGU. |
Load notes from Joplin. |
|
Load a JSON file using a jq schema. |
|
Client for lakeFS. |
|
|
Load from lakeFS. |
Load from lakeFS as unstructured data. |
|
Load from LarkSuite (FeiShu). |
|
Load Markdown files using Unstructured. |
|
Load the Mastodon 'toots'. |
|
Load from Alibaba Cloud MaxCompute table. |
|
Load MediaWiki dump from an XML file. |
|
Merge documents from a list of loaders |
|
|
Parse MHTML files with BeautifulSoup. |
Load from Modern Treasury. |
|
Load MongoDB documents. |
|
|
Load news articles from URLs using Unstructured. |
Load Jupyter notebook (.ipynb) files. |
|
Load Notion directory dump. |
|
Load from Notion DB. |
|
|
Load from any file type using Nuclia Understanding API. |
Load from Huawei OBS directory. |
|
Load from the Huawei OBS file. |
|
Load Obsidian files from directory. |
|
Load OpenOffice ODT files using Unstructured. |
|
Load from Microsoft OneDrive. |
|
Load a file from Microsoft OneDrive. |
|
Load pages from OneNote notebooks. |
|
Load from Open City. |
|
Load Org-Mode files using Unstructured. |
|
Transcribe and parse audio files. |
|
|
Transcribe and parse audio files with OpenAI Whisper model. |
Transcribe and parse audio files. |
|
|
Loads a PDF with Azure Document Intelligence (formerly Forms Recognizer). |
Google Cloud Document AI parser. |
|
A dataclass to store Document AI parsing results. |
|
Parser that uses mime-types to parse a blob. |
|
Load article PDF files using Grobid. |
|
Exception raised when the Grobid server is unavailable. |
|
Parse HTML files using Beautiful Soup. |
|
|
Code segmenter for COBOL. |
|
Abstract class for the code segmenter. |
|
Code segmenter for JavaScript. |
|
Parse using the respective programming language syntax. |
|
Code segmenter for Python. |
Parse the Microsoft Word documents from a blob. |
|
Send PDF files to Amazon Textract and parse them. |
|
|
Loads a PDF with Azure Document Intelligence (formerly Form Recognizer) and chunks at character level. |
Parse PDF using PDFMiner. |
|
Parse PDF with PDFPlumber. |
|
Parse PDF using PyMuPDF. |
|
Load PDF using pypdf |
|
Parse PDF with PyPDFium2. |
|
Parser for text blobs. |
|
Load PDF files from a local file system, HTTP or S3. |
|
|
Base Loader class for PDF files. |
Loads a PDF with Azure Document Intelligence |
|
|
Load PDF files using Mathpix service. |
|
Load online PDF. |
|
Load PDF files using PDFMiner. |
Load PDF files as HTML content using PDFMiner. |
|
|
Load PDF files using pdfplumber. |
|
Load PDF files using PyMuPDF. |
Load a directory with PDF files using pypdf and chunks at character level. |
|
|
Load PDF using pypdf into list of documents. |
|
Load PDF using pypdfium2 and chunks at character level. |
Load PDF files using Unstructured. |
|
|
Load Polars DataFrame. |
|
Load Microsoft PowerPoint files using Unstructured. |
Load from Psychic.dev. |
|
Load from the PubMed biomedical library. |
|
|
Load PySpark DataFrames. |
|
Load Python files, respecting any non-default encoding if specified. |
|
Load Quip pages. |
Load ReadTheDocs documentation directory. |
|
|
Load all child links from a URL page. |
Load Reddit posts. |
|
Load Roam files from a directory. |
|
Column not found error. |
|
Load from a Rockset database. |
|
|
Load content from RSpace notebooks, folders, documents or PDF Gallery files. |
|
Load news articles from RSS feeds using Unstructured. |
Load RST files using Unstructured. |
|
Load RTF files using Unstructured. |
|
Load from Amazon AWS S3 directory. |
|
|
Load from Amazon AWS S3 file. |
Load from SharePoint. |
|
|
Load a sitemap and its URLs. |
Load from a Slack directory dump. |
|
Load from Snowflake API. |
|
Load from Spreedly API. |
|
|
Load .srt (subtitle) files. |
|
Load from Stripe API. |
Load Telegram chat json directory dump. |
|
Load from Telegram chat dump. |
|
|
Load from Tencent Cloud COS directory. |
Load from Tencent Cloud COS file. |
|
|
Load from TensorFlow Dataset. |
|
Load text file. |
Load HTML using 2markdown API. |
|
|
Load TOML files. |
|
Load cards from a Trello board. |
Load TSV files using Unstructured. |
|
Load Twitter tweets. |
|
|
Load files using Unstructured API. |
|
Load files using Unstructured API. |
Base Loader that uses Unstructured. |
|
|
Load files using Unstructured. |
Load files using Unstructured. |
|
Load files from remote URLs using Unstructured. |
|
Abstract base class for all evaluators. |
|
Load HTML pages with Playwright and parse with Unstructured. |
|
|
Evaluates the page HTML content using the unstructured library. |
Load HTML pages with Selenium and parse with Unstructured. |
|
Load weather data with Open Weather Map API. |
|
Load HTML pages using urllib and parse them with BeautifulSoup. |
|
Load WhatsApp messages text file. |
|
Load from Wikipedia. |
|
Load DOCX file using docx2txt and chunks at character level. |
|
|
Load Microsoft Word file using Unstructured. |
Load XML file using Unstructured. |
|
Load Xorbits DataFrame. |
|
Generic Google API Client. |
|
Load all Videos from a YouTube Channel. |
|
|
Load YouTube transcripts. |
Functions¶
Fetch the mime types for the specified file types. |
|
Combine message information in a readable format ready to be used. |
|
Combine message information in a readable format ready to be used. |
|
Try to detect the file encoding. |
|
Combine cells information in a readable format ready to be used. |
|
Recursively remove newlines, no matter the data structure they are stored in. |
|
|
Extract text from images with RapidOCR. |
Get a parser by parser name. |
|
Default joiner for content columns. |
|
Combine message information in a readable format ready to be used. |
|
Convert a string or list of strings to a list of Documents with metadata. |
|
Retrieve a list of elements from the Unstructured API. |
|
|
Check if the installed Unstructured version exceeds the minimum version for the feature in question. |
|
Raise an error if the Unstructured version does not exceed the specified minimum. |
Combine message information in a readable format ready to be used. |
langchain_community.document_transformers¶
Document Transformers are classes to transform Documents.
Document Transformers are usually used to transform many Documents in a single run.
Class hierarchy:
BaseDocumentTransformer --> <name> # Examples: DoctranQATransformer, DoctranTextTranslator
Main helpers:
Document
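A minimal sketch that strips HTML markup from loaded pages (assumes the html2text package is installed):
from langchain_community.document_loaders import AsyncHtmlLoader
from langchain_community.document_transformers import Html2TextTransformer

docs = AsyncHtmlLoader(["https://python.langchain.com"]).load()
# Same Documents, with markup converted to plain text.
docs = Html2TextTransformer().transform_documents(docs)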
Classes¶
|
Transform HTML content by extracting specific tags and removing unwanted ones. |
|
Extract properties from text documents using doctran. |
|
Extract QA from text documents using doctran. |
|
Translate text documents using doctran. |
|
Perform K-means clustering on document vectors. |
|
Filter that drops redundant documents by comparing their embeddings. |
|
Translate text documents using Google Cloud Translation. |
Replace occurrences of a particular search pattern with a replacement string |
|
|
Lost in the middle: Performance degrades when models must access relevant information in the middle of long contexts. |
|
The Nuclia Understanding API splits into paragraphs and sentences, identifies entities, provides a summary of the text and generates embeddings for all sentences. |
Extract metadata tags from document contents using OpenAI functions. |
Functions¶
|
Get all navigable strings from a BeautifulSoup element. |
|
Convert a list of documents to a list of documents with state. |
|
Create a DocumentTransformer that uses an OpenAI function chain to automatically |
langchain_community.embeddings¶
Embedding models are wrappers around embedding models from different APIs and services.
Embedding models can be LLMs or not.
Class hierarchy:
Embeddings --> <name>Embeddings # Examples: OpenAIEmbeddings, HuggingFaceEmbeddings
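A minimal sketch of the shared interface, using the fake model so no API key is needed:
from langchain_community.embeddings import FakeEmbeddings

emb = FakeEmbeddings(size=256)                 # random vectors, for testing
vector = emb.embed_query("hello world")        # a list of 256 floats
vectors = emb.embed_documents(["foo", "bar"])  # one vector per document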
Classes¶
|
Aleph Alpha's asymmetric semantic embedding. |
The symmetric version of the Aleph Alpha's semantic embeddings. |
|
Embedding documents and queries with Awa DB. |
|
Azure OpenAI Embeddings API. |
|
Baidu Qianfan Embeddings embedding models. |
|
Bedrock embedding models. |
|
Bookend AI sentence_transformers embedding models. |
|
Clarifai embedding models. |
|
|
Cloudflare Workers AI embedding model. |
Cohere embedding models. |
|
DashScope embedding models. |
|
Wrapper around embeddings LLMs in Databricks. |
|
Deep Infra's embedding inference service. |
|
EdenAI embedding. |
|
Elasticsearch embedding models. |
|
Embaas's embedding service. |
|
Payload for the Embaas embeddings API. |
|
Ernie Embeddings V1 embedding models. |
|
Fake embedding model that always returns the same embedding vector for the same text. |
|
Fake embedding model. |
|
Qdrant FastEmbedding models. |
|
Google's PaLM Embeddings APIs. |
|
GPT4All embedding models. |
|
Gradient.ai Embedding models. |
|
|
Deprecated, TinyAsyncGradientEmbeddingClient was removed. |
HuggingFace BGE sentence_transformers embedding models. |
|
HuggingFace sentence_transformers embedding models. |
|
Embed texts using the HuggingFace API. |
|
Wrapper around sentence_transformers embedding models. |
|
HuggingFaceHub embedding models. |
|
Embedding models for self-hosted https://github.com/michaelfeil/infinity This should also work for text-embeddings-inference and other self-hosted openai-compatible servers. |
|
|
A helper tool to embed Infinity. |
Wrapper around embeddings LLMs in the Javelin AI Gateway. |
|
Jina embedding models. |
|
JohnSnowLabs embedding models |
|
llama.cpp embedding models. |
|
LLMRails embedding models. |
|
LocalAI embedding models. |
|
MiniMax's embedding service. |
|
Wrapper around embeddings LLMs in MLflow. |
|
Wrapper around embeddings LLMs in the MLflow AI Gateway. |
|
ModelScopeHub embedding models. |
|
MosaicML embedding service. |
|
NLP Cloud embedding models. |
|
OctoAI Compute Service embedding models. |
|
Ollama locally runs large language models. |
|
OpenAI embedding models. |
|
Content handler for LLM class. |
|
Custom Sagemaker Inference Endpoints. |
|
Custom embedding models on self-hosted remote hardware. |
|
|
HuggingFace embedding models on self-hosted remote hardware. |
|
HuggingFace InstructEmbedding models on self-hosted remote hardware. |
Embeddings by SpaCy models. |
|
TensorflowHub embedding models. |
|
Google Cloud VertexAI embedding models. |
|
Volcengine Embeddings embedding models. |
|
Voyage embedding models. |
|
Xinference embedding models. |
|
YandexGPT Embeddings models. |
Functions¶
Use tenacity to retry the embedding call. |
|
Use tenacity to retry the completion call. |
|
Use tenacity to retry the embedding call. |
|
Use tenacity to retry the embedding call. |
|
Use tenacity to retry the completion call. |
|
Use tenacity to retry the embedding call. |
|
Use tenacity to retry the embedding call. |
|
|
Load the embedding model. |
Use tenacity to retry the embedding call. |
langchain_community.graphs¶
Graphs provide a natural language interface to graph databases.
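A minimal sketch against Neo4j (connection details are hypothetical):
from langchain_community.graphs import Neo4jGraph

graph = Neo4jGraph(
    url="bolt://localhost:7687",  # hypothetical connection details
    username="neo4j",
    password="password",
)
print(graph.schema)  # textual schema, useful for prompting an LLM
rows = graph.query("MATCH (n) RETURN count(n) AS n")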
Classes¶
ArangoDB wrapper for graph operations. |
|
|
FalkorDB wrapper for graph operations. |
Represents a graph document consisting of nodes and relationships. |
|
Represents a node in a graph with associated properties. |
|
Represents a directed relationship between two nodes in a graph. |
|
An abstract class wrapper for graph operations. |
|
|
HugeGraph wrapper for graph operations. |
|
Kùzu wrapper for graph operations. |
|
Memgraph wrapper for graph operations. |
|
NebulaGraph wrapper for graph operations. |
|
Neo4j wrapper for graph operations. |
|
Neptune wrapper for graph operations. |
A class to handle queries that fail to execute |
|
A triple in the graph. |
|
Networkx wrapper for entity graph operations. |
|
|
RDFlib wrapper for graph operations. |
Functions¶
Get the Arango DB client from credentials. |
|
|
Extract entities from entity string. |
Parse knowledge triples from the knowledge string. |
langchain_community.indexes¶
Classes¶
|
An abstract base class representing the interface for a record manager. |
langchain_community.llms¶
LLM classes provide access to the large language model (LLM) APIs and services.
Class hierarchy:
BaseLanguageModel --> BaseLLM --> LLM --> <name> # Examples: AI21, HuggingFaceHub, OpenAI
Main helpers:
LLMResult, PromptValue,
CallbackManagerForLLMRun, AsyncCallbackManagerForLLMRun,
CallbackManager, AsyncCallbackManager,
AIMessage, BaseMessage
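A minimal sketch of the shared text-in, text-out interface, using the fake model so no provider account is needed:
from langchain_community.llms import FakeListLLM

llm = FakeListLLM(responses=["Paris"])  # canned responses, for testing
print(llm.invoke("What is the capital of France?"))  # -> "Paris"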
Classes¶
AI21 large language models. |
|
Parameters for AI21 penalty data. |
|
Aleph Alpha large language models. |
|
Amazon API Gateway to access LLM models hosted on AWS. |
|
Adapter to prepare the inputs from Langchain to a format that the LLM model expects. |
|
Anthropic large language models. |
|
Anyscale large language models. |
|
Aphrodite language model. |
|
Arcee's Domain Adapted Language Models (DALMs). |
|
Aviary hosted models. |
|
|
Aviary backend. |
AzureML Managed Endpoint client. |
|
Azure ML Online Endpoint models. |
|
Transform request and response of AzureML endpoint to match with required schema. |
|
Content handler for the Dolly-v2-12b model |
|
Content handler for GPT2 |
|
Content handler for LLMs from the HuggingFace catalog. |
|
Content formatter for LLaMa |
|
Deprecated: Kept for backwards compatibility |
|
Baidu Qianfan hosted open source or customized models. |
|
Banana large language models. |
|
Baseten model |
|
Beam API for gpt2 large language model. |
|
Bedrock models. |
|
Base class for Bedrock models. |
|
Adapter class to prepare the inputs from Langchain to a format that the LLM model expects. |
|
NIBittensor LLMs |
|
CerebriumAI large language models. |
|
ChatGLM LLM service. |
|
Clarifai large language models. |
|
Langchain LLM class to help access the Cloudflare Workers AI service. |
|
Base class for Cohere models. |
|
Cohere large language models. |
|
C Transformers LLM models. |
|
CTranslate2 language model. |
|
Databricks serving endpoint or a cluster driver proxy app for LLM. |
|
DeepInfra models. |
|
Neural Magic DeepSparse LLM interface. |
|
Wrapper around edenai models. |
|
Fake LLM for testing purposes. |
|
Fake streaming list LLM for testing purposes. |
|
Fireworks models. |
|
ForefrontAI large language models. |
|
GigaChat large language models API. |
|
[Deprecated] Use langchain_google_genai.GoogleGenerativeAI instead. |
|
GooseAI large language models. |
|
GPT4All language models. |
|
Gradient.ai LLM Endpoints. |
|
Train result. |
|
HuggingFace Endpoint models. |
|
HuggingFaceHub models. |
|
HuggingFace Pipeline API. |
|
|
HuggingFace text generation API. |
It returns user input as the response. |
|
Javelin AI Gateway LLMs. |
|
Parameters for the Javelin AI Gateway LLM. |
|
Kobold API language model. |
|
llama.cpp model. |
|
HazyResearch's Manifest library. |
|
Wrapper around Minimax large language models. |
|
Common parameters for Minimax large language models. |
|
Wrapper around completions LLMs in MLflow. |
|
Wrapper around completions LLMs in the MLflow AI Gateway. |
|
Parameters for the MLflow AI Gateway LLM. |
|
Modal large language models. |
|
MosaicML LLM service. |
|
NLPCloud large language models. |
|
|
Base class for LLM deployed on OCI Data Science Model Deployment. |
|
OCI Data Science Model Deployment TGI Endpoint. |
|
VLLM deployed on OCI Data Science Model Deployment |
OctoAI LLM Endpoints. |
|
Ollama locally runs large language models. |
|
Raised when the Ollama endpoint is not found. |
|
An LLM wrapper that uses OpaquePrompts to sanitize prompts. |
|
Azure-specific OpenAI large language models. |
|
Base OpenAI large language model class. |
|
OpenAI large language models. |
|
OpenAI Chat large language models. |
|
Parameters for identifying a model as a typed dict. |
|
OpenLLM, supporting both in-process model instance and remote OpenLLM servers. |
|
OpenLM models. |
|
Langchain LLM class to help access the EAS LLM service. |
|
Petals Bloom models. |
|
PipelineAI large language models. |
|
Use your Predibase models with Langchain. |
|
Prediction Guard large language models. |
|
PromptLayer OpenAI large language models. |
|
Wrapper around OpenAI large language models. |
|
Replicate models. |
|
RWKV language models. |
|
A handler class to transform input from LLM to a format that SageMaker endpoint expects. |
|
Content handler for LLM class. |
|
A helper class for parsing the byte stream input. |
|
Sagemaker Inference Endpoint models. |
|
Model inference on self-hosted remote hardware. |
|
HuggingFace Pipeline API to run on self-hosted remote hardware. |
|
StochasticAI large language models. |
|
Nebula Service models. |
|
Text generation models from WebUI. |
|
Wrapper around Titan Takeoff APIs. |
|
Titan Takeoff Pro is a language model that can be used to generate text. |
|
LLM models from Together. |
|
Tongyi Qwen large language models. |
|
Google Vertex AI large language models. |
|
Large language models served from Vertex AI Model Garden. |
|
VLLM language model. |
|
vLLM OpenAI-compatible API client |
|
Base class for VolcEngineMaas models. |
|
Volcengine MaaS hosts a plethora of models. |
|
IBM watsonx.ai large language models. |
|
Writer large language models. |
|
Xinference large-scale model inference service. |
|
Yandex large language models. |
Functions¶
|
Create the LLMResult from the choices and prompts. |
|
Update token usage. |
|
Get completions from Aviary models. |
List available models |
|
|
Use tenacity to retry the completion call. |
|
Use tenacity to retry the completion call. |
Gets the default Databricks personal access token. |
|
Gets the default Databricks workspace hostname. |
|
Gets the notebook REPL context if running inside a Databricks notebook. |
|
|
Use tenacity to retry the completion call. |
Use tenacity to retry the completion call. |
|
Use tenacity to retry the completion call for streaming. |
|
|
Use tenacity to retry the completion call. |
Use tenacity to retry the completion call. |
|
Conditionally apply a decorator. |
|
|
Use tenacity to retry the completion call. |
Remove trailing slash and /api from url if present. |
|
|
Load LLM from file. |
Load LLM from Config Dict. |
|
|
Use tenacity to retry the async completion call. |
|
Use tenacity to retry the completion call. |
|
Update token usage. |
Use tenacity to retry the completion call. |
|
|
Generate text from the model. |
|
Because the dashscope SDK doesn't provide an async API, we wrap stream_generate_with_retry with an async generator. |
Check the response from the completion call. |
|
|
Use tenacity to retry the completion call. |
|
Use tenacity to retry the completion call. |
|
Cut off the text as soon as any stop words occur. |
|
Use tenacity to retry the completion call. |
|
Use tenacity to retry the completion call. |
|
Returns True if the model name is a Codey model. |
|
Returns True if the model name is a Gemini model. |
|
Use tenacity to retry the async completion call. |
|
Use tenacity to retry the completion call. |
langchain_community.retrievers¶
Retriever class returns Documents given a text query.
It is more general than a vector store. A retriever does not need to be able to store documents, only to return (or retrieve) them. Vector stores can be used as the backbone of a retriever, but there are other types of retrievers as well.
Class hierarchy:
BaseRetriever --> <name>Retriever # Examples: ArxivRetriever, MergerRetriever
Main helpers:
Document, Serializable, Callbacks,
CallbackManagerForRetrieverRun, AsyncCallbackManagerForRetrieverRun
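A minimal sketch (assumes the wikipedia package is installed):
from langchain_community.retrievers import WikipediaRetriever

retriever = WikipediaRetriever(top_k_results=2)
docs = retriever.get_relevant_documents("LangChain")  # a list of Documents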
Classes¶
Document retriever for Arcee's Domain Adapted Language Models (DALMs). |
|
Arxiv retriever. |
|
|
Azure Cognitive Search service retriever. |
Amazon Bedrock Knowledge Bases retrieval. |
|
Configuration for retrieval. |
|
Configuration for vector search. |
|
BM25 retriever without Elasticsearch. |
|
Chaindesk API retriever. |
|
ChatGPT plugin retriever. |
|
Cohere Chat API with RAG. |
|
Databerry API retriever. |
|
DocArray Document Indices retriever. |
|
|
Enumerator of the types of search to perform. |
Elasticsearch retriever that uses BM25. |
|
Embedchain retriever. |
|
|
A retriever based on Document AI Warehouse. |
|
Google Vertex Search API retriever alias for backwards compatibility. |
|
Google Vertex AI Search retriever for multi-turn conversations. |
|
Google Vertex AI Search retriever. |
Retriever for Kay.ai datasets. |
|
Additional result attribute. |
|
Value of an additional result attribute. |
|
Amazon Kendra Index retriever. |
|
Document attribute. |
|
Value of a document attribute. |
|
Information that highlights the keywords in the excerpt. |
|
Amazon Kendra Query API search result. |
|
Query API result item. |
|
Base class of a result item. |
|
Amazon Kendra Retrieve API search result. |
|
Retrieve API result item. |
|
Text with highlights. |
|
KNN retriever. |
|
LlamaIndex graph data structure retriever. |
|
LlamaIndex retriever. |
|
Metal API retriever. |
|
Milvus API retriever. |
|
Retriever for Outline API. |
|
|
Pinecone Hybrid Search retriever. |
PubMed API retriever. |
|
|
Qdrant sparse vector retriever. |
LangChain API retriever. |
|
SVM retriever. |
|
Search depth as enumerator. |
|
Tavily Search API retriever. |
|
TF-IDF retriever. |
|
Vespa retriever. |
|
|
Weaviate hybrid search retriever. |
Wikipedia API retriever. |
|
You retriever that uses You.com's search API. |
|
|
Which documents to search. |
|
Enumerator of the types of search to perform. |
Zep MemoryStore Retriever. |
|
Zilliz API retriever. |
Functions¶
|
Clean an excerpt from Kendra. |
Combine a ResultItem title and excerpt into a single string. |
|
|
Create an index of embeddings for a list of contexts. |
|
Deprecated MilvusRetreiver. |
Create an index from a list of contexts. |
|
Hash a text using SHA256. |
|
|
Create an index of embeddings for a list of contexts. |
|
Deprecated ZillizRetreiver. |
langchain_community.storage¶
Implementations of key-value stores and storage helpers.
This module provides implementations of various key-value stores that conform to a simple key-value interface.
The primary goal of these stores is to support the implementation of caching.
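A minimal sketch of the key-value interface (the Redis URL is hypothetical):
from langchain_community.storage import RedisStore

store = RedisStore(redis_url="redis://localhost:6379")  # hypothetical URL
store.mset([("user:1", b"alice")])  # values are raw bytes
print(store.mget(["user:1"]))       # [b'alice']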
Classes¶
Raised when a key is invalid; e.g., uses incorrect characters. |
|
|
BaseStore implementation using Redis as the underlying store. |
BaseStore implementation using Upstash Redis as the underlying store to store raw bytes. |
|
|
[Deprecated] BaseStore implementation using Upstash Redis as the underlying store to store strings. |
langchain_community.tools¶
Tools are classes that an Agent uses to interact with the world.
Each tool has a description. The agent uses the description to choose the right tool for the job.
Class hierarchy:
ToolMetaclass --> BaseTool --> <name>Tool # Examples: AIPluginTool, BaseGraphQLTool
<name> # Examples: BraveSearch, HumanInputRun
Main helpers:
CallbackManagerForToolRun, AsyncCallbackManagerForToolRun
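A minimal sketch (assumes the duckduckgo-search package is installed):
from langchain_community.tools import DuckDuckGoSearchRun

search = DuckDuckGoSearchRun()
print(search.description)       # what an agent reads to pick this tool
print(search.run("LangChain"))  # search results as text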
Classes¶
Tool for app operations. |
|
Type of app operation as enumerator. |
|
Schema for app operations. |
|
Base class for the AINetwork tools. |
|
|
Type of operation as enumerator. |
Tool for owner operations. |
|
Schema for owner operations. |
|
Tool for owner operations. |
|
Schema for owner operations. |
|
Tool for transfer operations. |
|
Schema for transfer operations. |
|
Tool for value operations. |
|
Schema for value operations. |
|
Base Tool for Amadeus. |
|
Tool for finding the closest airport to a particular location. |
|
Schema for the AmadeusClosestAirport tool. |
|
Tool for searching for a single flight between two airports. |
|
Schema for the AmadeusFlightSearch tool. |
|
Input for the Arxiv tool. |
|
Tool that searches the Arxiv API. |
|
|
Tool that queries the Azure Cognitive Services Form Recognizer API. |
|
Tool that queries the Azure Cognitive Services Image Analysis API. |
|
Tool that queries the Azure Cognitive Services Speech2Text API. |
|
Tool that queries the Azure Cognitive Services Text2Speech API. |
|
Tool that queries the Azure Cognitive Services Text Analytics for Health API. |
Tool for evaluating python code in a sandbox environment. |
|
Arguments for the BearlyInterpreterTool. |
|
Information about a file to be uploaded. |
|
Tool that queries the Bing Search API and gets back json. |
|
Tool that queries the Bing search API. |
|
Tool that queries the BraveSearch. |
|
Tool that queries the Clickup API. |
|
Tool that queries the DataForSeo Google Search API and get back json. |
|
Tool that queries the DataForSeo Google search API. |
|
Input for the DuckDuckGo search tool. |
|
Tool that queries the DuckDuckGo search API and gets back json. |
|
Tool that queries the DuckDuckGo search API. |
|
Tool for running python code in a sandboxed environment for data analysis. |
|
Arguments for the E2BDataAnalysisTool. |
|
Description of the uploaded path with its remote path. |
|
Methods in this class recursively traverse an AST and output source code for the abstract syntax; original formatting is disregarded. |
|
Tool that queries the Eden AI Speech To Text API. |
|
Tool that queries the Eden AI Text to speech API. |
|
Base tool for all the EdenAI Tools. |
|
Tool that queries the Eden AI Explicit image detection. |
|
|
Tool that queries the Eden AI Object detection API. |
Tool that queries the Eden AI Identity parsing API. |
|
Tool that queries the Eden AI Invoice parsing API. |
|
Tool that queries the Eden AI Explicit text detection. |
|
Models available for Eleven Labs Text2Speech. |
|
Models available for Eleven Labs Text2Speech. |
|
Tool that queries the Eleven Labs Text2Speech API. |
|
Tool that copies a file. |
|
Input for CopyFileTool. |
|
Tool that deletes a file. |
|
Input for DeleteFileTool. |
|
Input for FileSearchTool. |
|
Tool that searches for files in a subdirectory that match a regex pattern. |
|
Input for ListDirectoryTool. |
|
Tool that lists files and directories in a specified folder. |
|
Input for MoveFileTool. |
|
Tool that moves a file. |
|
Input for ReadFileTool. |
|
Tool that reads a file. |
|
Mixin for file system tools. |
|
Error for paths outside the root directory. |
|
Input for WriteFileTool. |
|
Tool that writes a file to disk. |
|
Tool for interacting with the GitHub API. |
|
Tool for interacting with the GitLab API. |
|
Base class for Gmail tools. |
|
Input for CreateDraftTool. |
|
Tool that creates a draft email for Gmail. |
|
Tool that gets a message by ID from Gmail. |
|
Input for GetMessageTool. |
|
Input for GetMessageTool. |
|
Tool that gets a thread by ID from Gmail. |
|
Tool that searches for messages or threads in Gmail. |
|
|
Enumerator of Resources to search. |
Input for SearchGmailTool. |
|
Tool that sends a message to Gmail. |
|
Input for SendMessageTool. |
|
Tool that adds the capability to query using the Golden API and get back JSON. |
|
Tool that queries the Google Cloud Text to Speech API. |
|
Tool that queries the Google Finance API. |
|
Tool that queries the Google Jobs API. |
|
Tool that queries the Google Lens API. |
|
Input for GooglePlacesTool. |
|
Tool that queries the Google places API. |
|
Tool that queries the Google search API. |
|
Tool that queries the Google Search API and gets back json. |
|
Tool that queries the Google search API. |
|
Tool that queries the Serper.dev Google Search API and get back json. |
|
Tool that queries the Serper.dev Google search API. |
|
Tool that queries the Google trends API. |
|
Base tool for querying a GraphQL API. |
|
Tool that asks user for input. |
|
IFTTT Webhook. |
|
Tool that queries the Atlassian Jira API. |
|
Tool for getting a value in a JSON spec. |
|
Tool for listing keys in a JSON spec. |
|
Base class for JSON spec. |
|
Tool that trains a language model. |
|
|
Protocol for trainable language models. |
Tool that searches the Merriam-Webster API. |
|
Tool that queries the Metaphor Search API and gets back json. |
|
Input for UpdateSessionTool. |
|
Tool that closes an existing Multion Browser Window with provided fields. |
|
Input for CreateSessionTool. |
|
Tool that creates a new Multion Browser Window with provided fields. |
|
Tool that updates an existing Multion Browser Window with provided fields. |
|
Input for UpdateSessionTool. |
|
Tool that queries the NASA API. |
|
Input for Nuclia Understanding API. |
|
Tool to process files with the Nuclia Understanding API. |
|
Base class for the Office 365 tools. |
|
|
Input for SendMessageTool. |
Tool for creating a draft email in Office 365. |
|
Class for searching calendar events in Office 365 |
|
Input for SearchEmails Tool. |
|
Class for searching email messages in Office 365 |
|
Input for SearchEmails Tool. |
|
Tool for sending calendar events in Office 365. |
|
Input for CreateEvent Tool. |
|
Tool for sending an email in Office 365. |
|
Input for SendMessageTool. |
|
A model for a single API operation. |
|
A model for a property in the query, path, header, or cookie params. |
|
Base model for an API property. |
|
The location of the property. |
|
A model for a request body. |
|
A model for a request body property. |
|
Tool that queries the OpenWeatherMap API. |
|
Base class for browser tools. |
|
Tool for clicking on an element with the given CSS selector. |
|
Input for ClickTool. |
|
Tool for getting the URL of the current webpage. |
|
Extract all hyperlinks on the page. |
|
|
Input for ExtractHyperlinksTool. |
Tool for extracting all the text on the current webpage. |
|
Tool for getting elements in the current web page matching a CSS selector. |
|
Input for GetElementsTool. |
|
Tool for navigating a browser to a URL. |
|
Input for NavigateToolInput. |
|
Navigate back to the previous page in the browser history. |
|
AI Plugin Definition. |
|
Tool for getting the OpenAPI spec for an AI Plugin. |
|
Schema for AIPluginTool. |
|
API Configuration. |
|
Tool for getting metadata about a PowerBI Dataset. |
|
Tool for getting tables names. |
|
Tool for querying a Power BI Dataset. |
|
Tool that searches the PubMed API. |
|
Tool that queries for posts on a subreddit. |
|
Input for Reddit search. |
|
Base class for requests tools. |
|
Tool for making a DELETE request to an API endpoint. |
|
Tool for making a GET request to an API endpoint. |
|
Tool for making a PATCH request to an API endpoint. |
|
Tool for making a POST request to an API endpoint. |
|
Tool for making a PUT request to an API endpoint. |
|
Input for SceneXplain. |
|
Tool that explains images. |
|
Tool that queries the SearchApi.io search API and returns JSON. |
|
Tool that queries the SearchApi.io search API. |
|
Tool that queries a Searx instance and gets back json. |
|
Tool that queries a Searx instance. |
|
Tool that searches the semanticscholar API. |
|
Input for the SemanticScholar tool. |
|
Commands for the Bash Shell tool. |
|
Tool to run shell commands. |
|
Base class for Slack tools. |
|
Tool that gets Slack channel information. |
|
Tool that gets Slack messages. |
|
Input schema for SlackGetMessages. |
|
Input for ScheduleMessageTool. |
|
Tool for scheduling a message in Slack. |
|
Input for SendMessageTool. |
|
Tool for sending a message in Slack. |
|
Input for CopyFileTool. |
|
Tool that adds the capability to sleep. |
|
Base tool for interacting with Spark SQL. |
|
Tool for getting metadata about a Spark SQL. |
|
Tool for getting tables names. |
|
Use an LLM to check if a query is correct. |
|
Tool for querying a Spark SQL. |
|
Base tool for interacting with a SQL database. |
|
Tool for getting metadata about a SQL database. |
|
Tool for getting tables names. |
|
Use an LLM to check if a query is correct. |
|
Tool for querying a SQL database. |
|
Tool that uses StackExchange |
|
Tool that searches the Steam Web API. |
|
Supported Image Models for generation. |
|
|
Tool used to generate images from a text-prompt. |
Tool that queries the Tavily Search API and gets back an answer. |
|
Input for the Tavily tool. |
|
Tool that queries the Tavily Search API and gets back json. |
|
Base class for tools that use a VectorStore. |
|
Tool for the VectorDBQA chain. |
|
Tool for the VectorDBQAWithSources chain. |
|
Tool that searches the Wikipedia API. |
|
Tool that queries using the Wolfram Alpha SDK. |
|
Tool that searches financial news on Yahoo Finance. |
|
Tool that queries YouTube. |
|
Returns a list of all exposed (enabled) actions associated with the current user. |
|
Executes an action that is identified by action_id and exposed (enabled) by the current user. |
Functions¶
|
Authenticate using the AIN Blockchain |
Authenticate using the Amadeus API |
|
|
Detect if the file is local or remote. |
|
Download audio from url to local. |
Convert a file to base64. |
|
|
Get the first n lines of a file. |
|
Strip markdown code from a string. |
|
Format tool into the OpenAI function API. |
Format tool into the OpenAI function API. |
|
Deprecated. |
|
Add print statement to the last line if it's missing. |
|
Call f on each item in seq, calling inter() in between. |
|
Parse a file and pretty-print it to output. |
|
|
Resolve a relative path, raising an error if not within the root directory. |
Check if path is relative to root. |
|
Build a Gmail service. |
|
Clean email body. |
|
Get credentials. |
|
Import google libraries. |
|
Import googleapiclient.discovery.build function. |
|
Import InstalledAppFlow class. |
|
Tool for asking the user for input. |
|
Authenticate using the Microsoft Graph API. |
|
Clean body of a message or event. |
|
Lazy import playwright browsers. |
|
Asynchronously get the current page of the browser. |
|
|
Create an async playwright browser. |
|
Create a playwright browser. |
Get the current page of the browser. |
|
Run an async coroutine. |
|
Convert the yaml or json serialized spec to a dict. |
|
Format tool into the OpenAI function API. |
|
Format tool into the OpenAI function API. |
|
Authenticate using the Slack API. |
|
|
Upload a block to a signed URL and return the public URL. |
langchain_community.utilities¶
Utilities are integrations with third-party systems and packages.
Other LangChain classes use Utilities to interact with third-party systems and packages.
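A minimal sketch (assumes the wikipedia package is installed):
from langchain_community.utilities import WikipediaAPIWrapper

wiki = WikipediaAPIWrapper(top_k_results=1)
print(wiki.run("LangChain"))  # summary text for the top hit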
Classes¶
Wrapper for AlphaVantage API for Currency Exchange Rate. |
|
Wrapper around Apify. |
|
Arcee document. |
|
Adapter for Arcee documents |
|
Source of an Arcee document. |
|
|
Routes available for the Arcee API as enumerator. |
|
Wrapper for Arcee API. |
Filters available for a DALM retrieval and generation. |
|
|
Filter types available for a DALM retrieval as enumerator. |
Wrapper around ArxivAPI. |
|
Wrapper for AWS Lambda SDK. |
|
Wrapper around bibtexparser. |
|
Wrapper for Bing Search API. |
|
Wrapper around the Brave search engine. |
|
|
Component class for a list. |
Wrapper for Clickup API. |
|
Base class for all components. |
|
|
Component class for a member. |
|
Component class for a space. |
|
Class for a task. |
|
Component class for a team. |
Wrapper for OpenAI's DALL-E Image Generator. |
|
Wrapper around the DataForSeo API. |
|
Wrapper for DuckDuckGo Search API. |
|
Wrapper for GitHub API. |
|
Wrapper for GitLab API. |
|
Wrapper for Golden. |
|
Wrapper for SerpApi's Google Finance API |
|
Wrapper for SerpApi's Google Scholar API |
|
Wrapper for SerpApi's Google Lens API |
|
Wrapper around Google Places API. |
|
Wrapper for Google Scholar API |
|
Wrapper for Google Search API. |
|
Wrapper around the Serper.dev Google Search API. |
|
Wrapper for SerpApi's Google Scholar API |
|
Wrapper around GraphQL API. |
|
Wrapper for Jira API. |
|
Interface for querying Alibaba Cloud MaxCompute tables. |
|
Wrapper for Merriam-Webster. |
|
Wrapper for Metaphor Search API. |
|
Wrapper for NASA API. |
|
|
Enumerator of the HTTP verbs. |
OpenAPI Model that removes mis-formatted parts of the spec. |
|
Wrapper for OpenWeatherMap API using PyOWM. |
|
Wrapper around OutlineAPI. |
|
Portkey configuration. |
|
Create PowerBI engine from dataset ID and credential or token. |
|
Wrapper around PubMed API. |
|
Simulates a standalone Python REPL. |
|
Wrapper for Reddit API |
|
|
Escape punctuation within an input string. |
Wrapper around requests to handle auth and async. |
|
alias of TextRequestsWrapper |
|
Lightweight wrapper around requests library. |
|
Wrapper for SceneXplain API. |
|
Wrapper around SearchApi API. |
|
Dict like wrapper around search api results. |
|
Wrapper for Searx API. |
|
Wrapper around semanticscholar.org API. |
|
Context manager to hide prints. |
|
Wrapper around SerpAPI. |
|
|
SparkSQL is a utility class for interacting with Spark SQL. |
|
SQLAlchemy wrapper around a database. |
Wrapper for Stack Exchange API. |
|
Wrapper for Steam API. |
|
Wrapper for Tavily Search API. |
|
Access to the TensorFlow Datasets. |
|
Messaging Client using Twilio. |
|
Wrapper around WikipediaAPI. |
|
Wrapper for Wolfram Alpha. |
|
Wrapper for Zapier NLA. |
Functions¶
Get the number of tokens in a string of text. |
|
Get the token ids for a string of text. |
|
|
Extract elements from a dictionary. |
|
Fetch data from a URL. |
|
Fetch the first id from a dictionary. |
|
Fetch the folder id. |
|
Fetch the list id. |
|
Fetch the space id. |
|
Fetch the team id. |
|
Attempt to parse a JSON string and return the parsed object. |
Parse a dictionary by creating a component and then turning it back into a dictionary. |
|
Restore the original sensitive data from the sanitized text. |
|
Sanitize input string or dict of strings by replacing sensitive data with placeholders. |
|
Add single quotes around table names that contain spaces. |
|
|
Converts a JSON object to a markdown table. |
Check if the correct Redis modules are installed. |
|
|
Get a Redis client from the given connection URL. |
|
Truncate a string to a certain number of words, based on the max string length. |
Creates a retry decorator for Vertex / Palm LLMs. |
|
|
Returns a custom user agent header. |
|
Init vertexai. |
Load an image from GCS. |
|
Raise an ImportError if the Vertex SDK is not available. |
langchain_community.utils
¶
Utility functions for LangChain.
Classes¶
Representation of a callable function to the Ernie API. |
|
Representation of a callable function to the Ernie API. |
|
Representation of a callable function to the OpenAI API. |
|
Representation of a callable function to the OpenAI API. |
Functions¶
|
Converts a Pydantic model to a function description for the Ernie API. |
Converts a Pydantic model to a function description for the Ernie API. |
|
Row-wise cosine similarity between two equal-width matrices. |
|
|
Row-wise cosine similarity with optional top-k and score threshold filtering (see the sketch after this list). |
Return whether the OpenAI API version is v1 or later. |
|
|
Converts a Pydantic model to a function description for the OpenAI API. |
Converts a Pydantic model to a function description for the OpenAI API. |
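A minimal sketch of the two cosine-similarity helpers above (from langchain_community.utils.math):

import numpy as np
from langchain_community.utils.math import cosine_similarity, cosine_similarity_top_k

X = np.array([[1.0, 0.0], [0.0, 1.0]])
Y = np.array([[1.0, 1.0], [1.0, 0.0]])

# Full (2, 2) similarity matrix: one row per row of X.
print(cosine_similarity(X, Y))

# Only the best-scoring (x_idx, y_idx) pairs at or above the threshold.
idxs, scores = cosine_similarity_top_k(X, Y, top_k=2, score_threshold=0.5)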
langchain_community.vectorstores
¶
Vector store stores embedded data and performs vector search.
One of the most common ways to store and search over unstructured data is to embed it and store the resulting embedding vectors, and then query the store and retrieve the data that are ‘most similar’ to the embedded query.
Class hierarchy:
VectorStore --> <name> # Examples: Annoy, FAISS, Milvus
BaseRetriever --> VectorStoreRetriever --> <name>Retriever # Example: VespaRetriever
Main helpers:
Embeddings, Document
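A minimal sketch of the hierarchy in action, using FAISS (requires faiss-cpu). FakeEmbeddings is a placeholder Embeddings implementation so the sketch runs without external services; swap in a real embedding model for meaningful results:

from langchain_community.embeddings import FakeEmbeddings
from langchain_community.vectorstores import FAISS

# VectorStore --> FAISS: embed texts, store them, search by similarity.
store = FAISS.from_texts(
    ["LangChain has many vector store integrations.",
     "FAISS performs similarity search."],
    embedding=FakeEmbeddings(size=32),  # placeholder: random vectors
)
docs = store.similarity_search("vector search", k=1)  # list of Documents

# BaseRetriever --> VectorStoreRetriever: any store can act as a retriever.
retriever = store.as_retriever()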
Classes¶
|
Alibaba Cloud OpenSearch vector store. |
|
Alibaba Cloud OpenSearch client configuration. |
|
AnalyticDB (distributed PostgreSQL) vector store. |
|
Annoy vector store. |
|
Wrapper around DataStax Astra DB for vector-store workloads. |
|
Atlas vector store. |
|
AwaDB vector store. |
Azure Cosmos DB for MongoDB vCore vector store. |
|
Cosmos DB Similarity Type as enumerator. |
|
|
Azure Cognitive Search vector store. |
Retriever that uses Azure Cognitive Search. |
|
|
|
Baidu Elasticsearch vector store. |
|
|
Google Cloud BigQuery vector store. |
|
Wrapper around Apache Cassandra(R) for vector-store workloads. |
|
ChromaDB vector store. |
|
Clarifai AI vector store. |
|
ClickHouse VectorSearch vector store. |
ClickHouse client configuration. |
|
DashVector vector store. |
|
|
Databricks Vector Search vector store. |
Activeloop Deep Lake vector store. |
|
|
Dingo vector store. |
Base class for DocArray based vector stores. |
|
HnswLib storage using DocArray package. |
|
In-memory DocArray storage for exact search. |
|
[Deprecated] Elasticsearch with k-nearest neighbor search (k-NN) vector store. |
|
ElasticVectorSearch uses the brute force method of searching on vectors. |
|
Approximate retrieval strategy using the HNSW algorithm. |
|
Base class for Elasticsearch retrieval strategies. |
|
Elasticsearch vector store. |
|
Exact retrieval strategy using the script_score query. |
|
Sparse retrieval strategy using the text_expansion processor. |
|
|
Wrapper around Epsilla vector database. |
|
Meta Faiss vector store. |
|
Hippo vector store. |
|
Hologres API vector store. |
|
Jaguar API vector store. |
|
LanceDB vector store. |
Implementation of Vector Store using LLMRails. |
|
Retriever for LLMRails. |
|
|
Marqo vector store. |
Google Vertex AI Vector Search (previously Matching Engine) vector store. |
|
|
Meilisearch vector store. |
|
Milvus vector store. |
Momento Vector Index (MVI) vector store. |
|
MongoDB Atlas Vector Search vector store. |
|
|
MyScale vector store. |
MyScale client configuration. |
|
MyScale vector store without a metadata column. |
|
|
Neo4j vector index. |
Enumerator of the Distance strategies. |
|
|
NucliaDB vector store. |
|
Amazon OpenSearch Vector Engine vector store. |
|
Base model for all SQL stores. |
Collection store. |
|
|
Embedding store. |
|
Postgres with the pg_embedding extension as a vector store. |
Result from a query. |
|
|
VectorStore backed by pgvecto_rs. |
|
Base model for the SQL stores. |
Enumerator of the Distance strategies. |
|
|
Postgres/PGVector vector store. |
|
Pinecone vector store. |
|
Qdrant vector store. |
Qdrant related exceptions. |
|
|
Redis vector database. |
Retriever for Redis VectorStore. |
|
Collection of RedisFilterFields. |
|
A logical expression of RedisFilterFields (see the sketch after this list). |
|
Base class for RedisFilterFields. |
|
RedisFilterOperator enumerator is used to create RedisFilterExpressions. |
|
A RedisFilterField representing a numeric field in a Redis index. |
|
A RedisFilterField representing a tag in a Redis index. |
|
A RedisFilterField representing a text field in a Redis index. |
|
Schema for flat vector fields in Redis. |
|
Schema for HNSW vector fields in Redis. |
|
Schema for numeric fields in Redis. |
|
Distance metrics for Redis vector fields. |
|
Base class for Redis fields. |
|
Schema for Redis index. |
|
Base class for Redis vector fields. |
|
Schema for tag fields in Redis. |
|
Schema for text fields in Redis. |
|
|
Rockset vector store. |
|
ScaNN vector store. |
|
SemaDB vector store. |
SingleStore DB vector store. |
|
|
Base class for serializing data. |
|
Serializes data in binary json using the bson python package. |
|
Serializes data in json using the json package from python standard library. |
Serializes data in Apache Parquet format using the pyarrow package. |
|
Simple in-memory vector store based on scikit-learn's NearestNeighbors implementation. |
|
Exception raised by SKLearnVectorStore. |
|
|
Wrapper around SQLite with vss extension as a vector database. |
|
StarRocks vector store. |
StarRocks client configuration. |
|
Supabase Postgres vector store. |
|
SurrealDB as Vector Store. |
|
|
Tair vector store. |
Tencent vector DB Connection params. |
|
Tencent vector DB Index params. |
|
Tencent VectorDB as a vector store. |
|
|
Tigris vector store. |
|
TileDB vector store. |
Timescale Postgres vector store. |
|
|
Typesense vector store. |
|
USearch vector store. |
|
Enumerator of the Distance strategies for calculating distances between vectors. |
|
Wrapper around Vald vector database. |
|
Vearch vector store (flag 1 for cluster, 0 for standalone). |
|
Configuration for Maximal Marginal Relevance (MMR) reranking: is_enabled toggles MMR; mmr_k is the number of results to fetch for MMR (default 50); diversity_bias is a number between 0 and 1 that determines the degree of diversity among the results, with 0 corresponding to minimum and 1 to maximum diversity (default 0.3). Note: diversity_bias is equivalent to 1 - lambda_mult, where lambda_mult is the value often used in max_marginal_relevance_search(); it was chosen as the more intuitive parameter. |
Configuration for summarization: is_enabled toggles summarization; max_results is the maximum number of results to summarize; response_lang is the requested language for the summary. |
|
|
Vectara API vector store. |
Configuration for Vectara search: k is the number of Documents to return (default 10); lambda_val is the lexical-match parameter for hybrid search; filter is a dictionary of argument(s) to filter on metadata, e.g. "doc.rating > 3.0 and part.lang = 'deu'" (see https://docs.vectara.com/docs/search-apis/sql/filter-overview for details); score_threshold, if defined, filters out results with a score below it; n_sentence_context is the number of sentences before/after the matching segment to include (default 2); mmr_config and summary_config are the MMRConfig and SummaryConfig dataclasses above. |
|
Retriever class for Vectara. |
|
|
Vespa vector store. |
|
Weaviate vector store. |
|
Xata vector store. |
Wrapper around Yellowbrick as a vector database. |
|
|
Configuration for a Zep Collection. |
|
Zep vector store. |
|
Zilliz vector store. |
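A minimal sketch of composing the Redis filter classes above into a RedisFilterExpression, assuming RedisTag and RedisNum are importable from langchain_community.vectorstores.redis:

from langchain_community.vectorstores.redis import RedisNum, RedisTag

# RedisFilterFields combine into a RedisFilterExpression via & (and) and | (or).
expr = (RedisTag("brand") == "nike") & (RedisNum("price") < 100)
# Pass the expression as filter=expr to a Redis vector store search.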
Functions¶
|
Create metadata from fields. |
Import annoy if available, otherwise raise an error. |
|
|
Check if a string contains multiple substrings. |
Import faiss if available, otherwise raise an error. |
|
|
Check if a string contains multiple substrings. |
Check that the values are not None or an empty string. |
|
Remove Lucene special characters. |
|
Sort the first element to match the index_name, if it exists. |
|
Decorator to call the synchronous method of the class if the async method is not implemented. |
|
Check if Redis index exists. |
|
Decorator to check for misuse of equality operators. |
|
Reads in the index schema from a dict or yaml file. |
|
Import scann if available, otherwise raise an error. |
|
Normalize vectors to unit length. |
|
Print a debug message if DEBUG is True. |
|
Get a named result from a query. |
|
|
Check if a string has multiple substrings. |
Import tiledb-vector-search if available, otherwise raise an error. |
|
Get the URI of the documents array. |
|
|
Get the URI of the documents array from group. |
Get the URI of the vector index. |
|
Get the URI of the vector index. |
|
Import usearch if available, otherwise raise an error. |
|
Filter out metadata types that are not supported for a vector store. |
|
Calculate maximal marginal relevance. |
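A worked sketch of the last helper, which re-ranks candidate embeddings to balance relevance to the query against diversity among the picks (assuming it is importable from langchain_community.vectorstores.utils):

import numpy as np
from langchain_community.vectorstores.utils import maximal_marginal_relevance

query = np.array([1.0, 0.0])
candidates = [
    np.array([1.0, 0.0]),  # maximally relevant
    np.array([1.0, 0.0]),  # exact duplicate of the first
    np.array([0.7, 0.7]),  # less relevant, but diverse
]
# lambda_mult trades relevance (toward 1.0) against diversity (toward 0.0).
picked = maximal_marginal_relevance(query, candidates, lambda_mult=0.4, k=2)
# picked == [0, 2]: the duplicate at index 1 loses to the diverse candidate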