langchain API Reference¶

langchain.agents¶

Agent is a class that uses an LLM to choose a sequence of actions to take.

In Chains, a sequence of actions is hardcoded. In Agents, a language model is used as a reasoning engine to determine which actions to take and in which order.

Agents select and use Tools and Toolkits for actions.

Class hierarchy:

BaseSingleActionAgent --> LLMSingleActionAgent
                          OpenAIFunctionsAgent
                          XMLAgent
                          Agent --> <name>Agent  # Examples: ZeroShotAgent, ChatAgent


BaseMultiActionAgent  --> OpenAIMultiFunctionsAgent

Main helpers:

AgentType, AgentExecutor, AgentOutputParser, AgentExecutorIterator,
AgentAction, AgentFinish
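How these helpers fit together can be illustrated with a toy loop: an AgentExecutor repeatedly asks the agent for its next step, runs the chosen tool, and stops when the agent returns an AgentFinish. The sketch below is plain Python, not the real langchain API; `fake_llm` and the simplified dataclasses are stand-ins.

```python
# Illustrative sketch of the loop AgentExecutor runs: the LLM repeatedly
# proposes an AgentAction until it emits an AgentFinish.
from dataclasses import dataclass

@dataclass
class AgentAction:
    tool: str
    tool_input: str

@dataclass
class AgentFinish:
    output: str

def fake_llm(question, scratchpad):
    # A real agent would prompt an LLM here; this stub hardcodes one tool call.
    if not scratchpad:
        return AgentAction(tool="calculator", tool_input="2**10")
    return AgentFinish(output=scratchpad[-1][1])

def run_agent(question, tools, max_iterations=5):
    scratchpad = []  # list of (AgentAction, observation) tuples
    for _ in range(max_iterations):
        step = fake_llm(question, scratchpad)
        if isinstance(step, AgentFinish):
            return step.output
        observation = tools[step.tool](step.tool_input)
        scratchpad.append((step, observation))
    raise RuntimeError("agent did not finish")

tools = {"calculator": lambda expr: str(eval(expr))}
print(run_agent("What is 2 to the 10th power?", tools))  # 1024
```

The scratchpad of (action, observation) tuples is what the format_scratchpad helpers below serialize back into the prompt on each iteration.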

Classes¶

agents.agent.Agent

Agent that calls the language model and decides the action to take.

agents.agent.AgentExecutor

Agent that uses tools.

agents.agent.AgentOutputParser

Base class for parsing agent output into agent action/finish.

agents.agent.BaseMultiActionAgent

Base Multi Action Agent class.

agents.agent.BaseSingleActionAgent

Base Single Action Agent class.

agents.agent.ExceptionTool

Tool that just returns the query.

agents.agent.LLMSingleActionAgent

Base class for single action agents.

agents.agent.MultiActionAgentOutputParser

Base class for parsing agent output into agent actions/finish.

agents.agent.RunnableAgent

Agent powered by runnables.

agents.agent.RunnableMultiActionAgent

Agent powered by runnables.

agents.agent_iterator.AgentExecutorIterator(...)

Iterator for AgentExecutor.

agents.agent_toolkits.vectorstore.toolkit.VectorStoreInfo

Information about a VectorStore.

agents.agent_toolkits.vectorstore.toolkit.VectorStoreRouterToolkit

Toolkit for routing between Vector Stores.

agents.agent_toolkits.vectorstore.toolkit.VectorStoreToolkit

Toolkit for interacting with a Vector Store.

agents.agent_types.AgentType(value[, names, ...])

An enum for agent types.

agents.chat.base.ChatAgent

Chat Agent.

agents.chat.output_parser.ChatOutputParser

Output parser for the chat agent.

agents.conversational.base.ConversationalAgent

An agent that holds a conversation in addition to using tools.

agents.conversational.output_parser.ConvoOutputParser

Output parser for the conversational agent.

agents.conversational_chat.base.ConversationalChatAgent

An agent designed to hold a conversation in addition to using tools.

agents.conversational_chat.output_parser.ConvoOutputParser

Output parser for the conversational agent.

agents.mrkl.base.ChainConfig(action_name, ...)

Configuration for chain to use in MRKL system.

agents.mrkl.base.MRKLChain

[Deprecated] Chain that implements the MRKL system.

agents.mrkl.base.ZeroShotAgent

Agent for the MRKL chain.

agents.mrkl.output_parser.MRKLOutputParser

MRKL Output parser for the chat agent.

agents.openai_assistant.base.OpenAIAssistantAction

AgentAction with info needed to submit custom tool output to existing run.

agents.openai_assistant.base.OpenAIAssistantFinish

AgentFinish with run and thread metadata.

agents.openai_assistant.base.OpenAIAssistantRunnable

Run an OpenAI Assistant.

agents.openai_functions_agent.agent_token_buffer_memory.AgentTokenBufferMemory

Memory used to save agent output AND intermediate steps.

agents.openai_functions_agent.base.OpenAIFunctionsAgent

An agent driven by OpenAI's function-calling API.

agents.openai_functions_multi_agent.base.OpenAIMultiFunctionsAgent

An agent driven by OpenAI's function-calling API.

agents.output_parsers.json.JSONAgentOutputParser

Parses tool invocations and final answers in JSON format.

agents.output_parsers.openai_functions.OpenAIFunctionsAgentOutputParser

Parses a message into agent action/finish.

agents.output_parsers.openai_tools.OpenAIToolAgentAction

Override init to support instantiation by position for backward compat.

agents.output_parsers.openai_tools.OpenAIToolsAgentOutputParser

Parses a message into agent actions/finish.

agents.output_parsers.react_json_single_input.ReActJsonSingleInputOutputParser

Parses ReAct-style LLM calls that have a single tool input in json format.

agents.output_parsers.react_single_input.ReActSingleInputOutputParser

Parses ReAct-style LLM calls that have a single tool input.
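The general shape of what a ReAct single-input parser handles can be sketched with a regex. This is a simplified illustration; the exact format and error handling of ReActSingleInputOutputParser are assumptions here.

```python
import re

# Simplified sketch of parsing ReAct-style LLM output of the form
# "Action: <tool>\nAction Input: <input>" or "Final Answer: <answer>".
ACTION_RE = re.compile(r"Action\s*:\s*(.*?)\s*Action\s*Input\s*:\s*(.*)", re.DOTALL)

def parse_react(text):
    if "Final Answer:" in text:
        return ("finish", text.split("Final Answer:")[-1].strip())
    match = ACTION_RE.search(text)
    if match is None:
        raise ValueError(f"Could not parse LLM output: {text!r}")
    return ("action", match.group(1).strip(), match.group(2).strip())

print(parse_react("Thought: I need math.\nAction: calculator\nAction Input: 2+2"))
```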

agents.output_parsers.self_ask.SelfAskOutputParser

Parses self-ask style LLM calls.

agents.output_parsers.xml.XMLAgentOutputParser

Parses tool invocations and final answers in XML format.

agents.react.base.DocstoreExplorer(docstore)

Class to assist with exploration of a document store.

agents.react.base.ReActChain

[Deprecated] Chain that implements the ReAct paper.

agents.react.base.ReActDocstoreAgent

Agent for the ReAct chain.

agents.react.base.ReActTextWorldAgent

Agent for the ReAct TextWorld chain.

agents.react.output_parser.ReActOutputParser

Output parser for the ReAct agent.

agents.schema.AgentScratchPadChatPromptTemplate

Chat prompt template for the agent scratchpad.

agents.self_ask_with_search.base.SelfAskWithSearchAgent

Agent for the self-ask-with-search paper.

agents.self_ask_with_search.base.SelfAskWithSearchChain

[Deprecated] Chain that does self-ask with search.

agents.structured_chat.base.StructuredChatAgent

Structured Chat Agent.

agents.structured_chat.output_parser.StructuredChatOutputParser

Output parser for the structured chat agent.

agents.structured_chat.output_parser.StructuredChatOutputParserWithRetries

Output parser with retries for the structured chat agent.

agents.tools.InvalidTool

Tool that is run when invalid tool name is encountered by agent.

agents.xml.base.XMLAgent

Agent that uses XML tags.

Functions¶

agents.agent_toolkits.conversational_retrieval.openai_functions.create_conversational_retrieval_agent(...)

A convenience method for creating a conversational retrieval agent.

agents.agent_toolkits.vectorstore.base.create_vectorstore_agent(...)

Construct a VectorStore agent from an LLM and tools.

agents.agent_toolkits.vectorstore.base.create_vectorstore_router_agent(...)

Construct a VectorStore router agent from an LLM and tools.

agents.format_scratchpad.log.format_log_to_str(...)

Construct the scratchpad that lets the agent continue its thought process.

agents.format_scratchpad.log_to_messages.format_log_to_messages(...)

Construct the scratchpad that lets the agent continue its thought process.

agents.format_scratchpad.openai_functions.format_to_openai_function_messages(...)

Convert (AgentAction, tool output) tuples into FunctionMessages.

agents.format_scratchpad.openai_functions.format_to_openai_functions(...)

Convert (AgentAction, tool output) tuples into FunctionMessages.

agents.format_scratchpad.openai_tools.format_to_openai_tool_messages(...)

Convert (AgentAction, tool output) tuples into FunctionMessages.

agents.format_scratchpad.xml.format_xml(...)

Format the intermediate steps as XML.
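The idea behind serializing intermediate steps to an XML scratchpad can be sketched as follows; the tag names are assumptions for illustration, not necessarily the ones format_xml emits.

```python
# Sketch of formatting (action, observation) intermediate steps as an XML
# scratchpad, in the spirit of format_xml.
def format_xml(intermediate_steps):
    log = ""
    for action, observation in intermediate_steps:
        tool, tool_input = action
        log += (
            f"<tool>{tool}</tool>"
            f"<tool_input>{tool_input}</tool_input>"
            f"<observation>{observation}</observation>"
        )
    return log

steps = [(("search", "weather in Paris"), "Sunny, 21 C")]
print(format_xml(steps))
```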

agents.initialize.initialize_agent(tools, llm)

Load an agent executor given tools and LLM.

agents.load_tools.get_all_tool_names()

Get a list of all possible tool names.

agents.load_tools.load_huggingface_tool(...)

Loads a tool from the HuggingFace Hub.

agents.load_tools.load_tools(tool_names[, ...])

Load tools based on their name.

agents.loading.load_agent(path, **kwargs)

Unified method for loading an agent from LangChainHub or local fs.

agents.loading.load_agent_from_config(config)

Load agent from Config Dict.

agents.output_parsers.openai_tools.parse_ai_message_to_openai_tool_action(message)

Parse an AI message potentially containing tool_calls.

agents.utils.validate_tools_single_input(...)

Validate tools for single input.

langchain.callbacks¶

Callback handlers allow listening to events in LangChain.

Class hierarchy:

BaseCallbackHandler --> <name>CallbackHandler  # Example: AimCallbackHandler
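The callback pattern can be sketched in plain Python: a runner notifies every registered handler of lifecycle events. This toy mirrors the shape of BaseCallbackHandler but is not the real interface.

```python
# Illustrative sketch of the callback idea: handlers are notified of events
# (start/end of an LLM call) during a run.
class BaseCallbackHandler:
    def on_llm_start(self, prompt):
        pass

    def on_llm_end(self, output):
        pass

class LoggingHandler(BaseCallbackHandler):
    def __init__(self):
        self.events = []

    def on_llm_start(self, prompt):
        self.events.append(("start", prompt))

    def on_llm_end(self, output):
        self.events.append(("end", output))

def run_llm(prompt, handlers):
    for h in handlers:
        h.on_llm_start(prompt)
    output = prompt.upper()  # stand-in for a model call
    for h in handlers:
        h.on_llm_end(output)
    return output

handler = LoggingHandler()
run_llm("hello", [handler])
print(handler.events)  # [('start', 'hello'), ('end', 'HELLO')]
```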

Classes¶

callbacks.file.FileCallbackHandler(filename)

Callback Handler that writes to a file.

callbacks.streaming_aiter.AsyncIteratorCallbackHandler()

Callback handler that returns an async iterator.

callbacks.streaming_aiter_final_only.AsyncFinalIteratorCallbackHandler(*)

Callback handler that returns an async iterator.

callbacks.streaming_stdout_final_only.FinalStreamingStdOutCallbackHandler(*)

Callback handler for streaming in agents.

callbacks.tracers.logging.LoggingCallbackHandler(logger)

Tracer that logs via the input Logger.

langchain.chains¶

Chains are easily reusable components linked together.

Chains encode a sequence of calls to components like models, document retrievers, other Chains, etc., and provide a simple interface to this sequence.

The Chain interface makes it easy to create apps that are:

  • Stateful: add Memory to any Chain to give it state,

  • Observable: pass Callbacks to a Chain to execute additional functionality, like logging, outside the main sequence of component calls,

  • Composable: combine Chains with other components, including other Chains.

Class hierarchy:

Chain --> <name>Chain  # Examples: LLMChain, MapReduceChain, RouterChain
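The composability described above can be sketched with a toy `Chain` class: each chain reads named inputs from a shared dict and writes named outputs, so chains compose by running in sequence. This is plain Python for illustration, not the langchain Chain interface.

```python
# Minimal sketch of the Chain idea: a callable unit with a named input key
# and output key, composable by threading a state dict through a sequence.
class Chain:
    def __init__(self, fn, input_key, output_key):
        self.fn = fn
        self.input_key = input_key
        self.output_key = output_key

    def __call__(self, inputs):
        return {**inputs, self.output_key: self.fn(inputs[self.input_key])}

summarize = Chain(lambda text: text[:20] + "...", "text", "summary")
shout = Chain(str.upper, "summary", "final")

state = {"text": "Chains encode a sequence of calls to components."}
for chain in (summarize, shout):
    state = chain(state)
print(state["final"])  # CHAINS ENCODE A SEQU...
```

SequentialChain and SimpleSequentialChain below formalize exactly this output-to-input threading.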

Classes¶

chains.api.base.APIChain

Chain that makes API calls and summarizes the responses to answer a question.

chains.api.openapi.chain.OpenAPIEndpointChain

Chain that interacts with an OpenAPI endpoint using natural language.

chains.api.openapi.requests_chain.APIRequesterChain

Get the request parser.

chains.api.openapi.requests_chain.APIRequesterOutputParser

Parse the request and error tags.

chains.api.openapi.response_chain.APIResponderChain

Get the response parser.

chains.api.openapi.response_chain.APIResponderOutputParser

Parse the response and error tags.

chains.base.Chain

Abstract base class for creating structured sequences of calls to components.

chains.combine_documents.base.AnalyzeDocumentChain

Chain that splits documents, then analyzes each piece.

chains.combine_documents.base.BaseCombineDocumentsChain

Base interface for chains combining documents.

chains.combine_documents.map_reduce.MapReduceDocumentsChain

Combining documents by mapping a chain over them, then combining results.

chains.combine_documents.map_rerank.MapRerankDocumentsChain

Combining documents by mapping a chain over them, then reranking results.

chains.combine_documents.reduce.AsyncCombineDocsProtocol(...)

Interface for the combine_docs method.

chains.combine_documents.reduce.CombineDocsProtocol(...)

Interface for the combine_docs method.

chains.combine_documents.reduce.ReduceDocumentsChain

Combine documents by recursively reducing them.

chains.combine_documents.refine.RefineDocumentsChain

Combine documents by doing a first pass and then refining on more documents.

chains.combine_documents.stuff.StuffDocumentsChain

Chain that combines documents by stuffing into context.

chains.constitutional_ai.base.ConstitutionalChain

Chain for applying constitutional principles.

chains.constitutional_ai.models.ConstitutionalPrinciple

Class for a constitutional principle.

chains.conversation.base.ConversationChain

Chain to have a conversation and load context from memory.

chains.conversational_retrieval.base.BaseConversationalRetrievalChain

Chain for chatting with an index.

chains.conversational_retrieval.base.ChatVectorDBChain

Chain for chatting with a vector database.

chains.conversational_retrieval.base.ConversationalRetrievalChain

Chain for having a conversation based on retrieved documents.

chains.conversational_retrieval.base.InputType

Create a new model by parsing and validating input data from keyword arguments.

chains.elasticsearch_database.base.ElasticsearchDatabaseChain

Chain for interacting with Elasticsearch Database.

chains.flare.base.FlareChain

Chain that combines a retriever, a question generator, and a response generator.

chains.flare.base.QuestionGeneratorChain

Chain that generates questions from uncertain spans.

chains.flare.prompts.FinishedOutputParser

Output parser that checks if the output is finished.

chains.graph_qa.arangodb.ArangoGraphQAChain

Chain for question-answering against a graph by generating AQL statements.

chains.graph_qa.base.GraphQAChain

Chain for question-answering against a graph.

chains.graph_qa.cypher.GraphCypherQAChain

Chain for question-answering against a graph by generating Cypher statements.

chains.graph_qa.cypher_utils.CypherQueryCorrector(schemas)

Used to correct relationship direction in generated Cypher statements.

chains.graph_qa.cypher_utils.Schema(...)

Create new instance of Schema(left_node, relation, right_node)

chains.graph_qa.falkordb.FalkorDBQAChain

Chain for question-answering against a graph by generating Cypher statements.

chains.graph_qa.hugegraph.HugeGraphQAChain

Chain for question-answering against a graph by generating gremlin statements.

chains.graph_qa.kuzu.KuzuQAChain

Question-answering against a graph by generating Cypher statements for Kùzu.

chains.graph_qa.nebulagraph.NebulaGraphQAChain

Chain for question-answering against a graph by generating nGQL statements.

chains.graph_qa.neptune_cypher.NeptuneOpenCypherQAChain

Chain for question-answering against a Neptune graph by generating openCypher statements.

chains.graph_qa.sparql.GraphSparqlQAChain

Question-answering against an RDF or OWL graph by generating SPARQL statements.

chains.hyde.base.HypotheticalDocumentEmbedder

Generate a hypothetical document for the query, then embed it.

chains.llm.LLMChain

Chain to run queries against LLMs.

chains.llm_checker.base.LLMCheckerChain

Chain for question-answering with self-verification.

chains.llm_math.base.LLMMathChain

Chain that interprets a prompt and executes python code to do math.

chains.llm_requests.LLMRequestsChain

Chain that requests a URL and then uses an LLM to parse results.

chains.llm_summarization_checker.base.LLMSummarizationCheckerChain

Chain for question-answering with self-verification.

chains.mapreduce.MapReduceChain

Map-reduce chain.

chains.moderation.OpenAIModerationChain

Pass input through a moderation endpoint.

chains.natbot.base.NatBotChain

Implement an LLM driven browser.

chains.natbot.crawler.Crawler()

A crawler for web pages.

chains.natbot.crawler.ElementInViewPort

A typed dictionary containing information about elements in the viewport.

chains.openai_functions.citation_fuzzy_match.FactWithEvidence

Class representing a single statement.

chains.openai_functions.citation_fuzzy_match.QuestionAnswer

A question and its answer as a list of facts, each of which should have a source.

chains.openai_functions.openapi.SimpleRequestChain

Chain for making a simple request to an API endpoint.

chains.openai_functions.qa_with_structure.AnswerWithSources

An answer to the question, with sources.

chains.prompt_selector.BasePromptSelector

Base class for prompt selectors.

chains.prompt_selector.ConditionalPromptSelector

Prompt collection that goes through conditionals.

chains.qa_generation.base.QAGenerationChain

Base class for question-answer generation chains.

chains.qa_with_sources.base.BaseQAWithSourcesChain

Question answering chain with sources over documents.

chains.qa_with_sources.base.QAWithSourcesChain

Question answering with sources over documents.

chains.qa_with_sources.loading.LoadingCallable(...)

Interface for loading the combine documents chain.

chains.qa_with_sources.retrieval.RetrievalQAWithSourcesChain

Question-answering with sources over an index.

chains.qa_with_sources.vector_db.VectorDBQAWithSourcesChain

Question-answering with sources over a vector database.

chains.query_constructor.base.StructuredQueryOutputParser

Output parser that parses a structured query.

chains.query_constructor.ir.Comparator(value)

Enumerator of the comparison operators.

chains.query_constructor.ir.Comparison

A comparison to a value.

chains.query_constructor.ir.Expr

Base class for all expressions.

chains.query_constructor.ir.FilterDirective

A filtering expression.

chains.query_constructor.ir.Operation

A logical operation over other directives.

chains.query_constructor.ir.Operator(value)

Enumerator of the operations.

chains.query_constructor.ir.StructuredQuery

A structured query.

chains.query_constructor.ir.Visitor()

Defines interface for IR translation using visitor pattern.
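The visitor pattern used here can be sketched as follows: each IR node (Comparison, Operation) accepts a visitor, and a translator visitor turns the tree into a backend-specific filter syntax. The classes and the output syntax below are simplified illustrations, not the real IR.

```python
# Sketch of visitor-based IR translation: Comparison and Operation nodes
# dispatch to a translator that renders a (made-up) string filter syntax.
class Comparison:
    def __init__(self, comparator, attribute, value):
        self.comparator, self.attribute, self.value = comparator, attribute, value

    def accept(self, visitor):
        return visitor.visit_comparison(self)

class Operation:
    def __init__(self, operator, arguments):
        self.operator, self.arguments = operator, arguments

    def accept(self, visitor):
        return visitor.visit_operation(self)

class SimpleTranslator:
    def visit_comparison(self, comp):
        return f"{comp.attribute} {comp.comparator} {comp.value!r}"

    def visit_operation(self, op):
        joined = f" {op.operator} ".join(a.accept(self) for a in op.arguments)
        return f"({joined})"

query = Operation("and", [Comparison("eq", "genre", "scifi"),
                          Comparison("gt", "year", 2000)])
print(query.accept(SimpleTranslator()))  # (genre eq 'scifi' and year gt 2000)
```

Writing one Visitor subclass per vector store backend is what lets the same structured query target many filter dialects.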

chains.query_constructor.parser.ISO8601Date

A date in ISO 8601 format (YYYY-MM-DD).

chains.query_constructor.schema.AttributeInfo

Information about a data source attribute.

chains.retrieval_qa.base.BaseRetrievalQA

Base class for question-answering chains.

chains.retrieval_qa.base.RetrievalQA

Chain for question-answering against an index.

chains.retrieval_qa.base.VectorDBQA

Chain for question-answering against a vector database.

chains.router.base.MultiRouteChain

Use a single chain to route an input to one of multiple candidate chains.

chains.router.base.Route(destination, ...)

Create new instance of Route(destination, next_inputs)

chains.router.base.RouterChain

Chain that outputs the name of a destination chain and the inputs to it.

chains.router.embedding_router.EmbeddingRouterChain

Chain that uses embeddings to route between options.

chains.router.llm_router.LLMRouterChain

A router chain that uses an LLM chain to perform routing.

chains.router.llm_router.RouterOutputParser

Parser for output of router chain in the multi-prompt chain.

chains.router.multi_prompt.MultiPromptChain

A multi-route chain that uses an LLM router chain to choose amongst prompts.

chains.router.multi_retrieval_qa.MultiRetrievalQAChain

A multi-route chain that uses an LLM router chain to choose amongst retrieval qa chains.

chains.sequential.SequentialChain

Chain where the outputs of one chain feed directly into the next.

chains.sequential.SimpleSequentialChain

Simple chain where the outputs of one step feed directly into the next.

chains.sql_database.query.SQLInput

Input for a SQL Chain.

chains.sql_database.query.SQLInputWithTables

Input for a SQL Chain.

chains.transform.TransformChain

Chain that transforms the chain output.

Functions¶

chains.combine_documents.reduce.acollapse_docs(...)

Execute a collapse function on a set of documents and merge their metadatas.

chains.combine_documents.reduce.collapse_docs(...)

Execute a collapse function on a set of documents and merge their metadatas.

chains.combine_documents.reduce.split_list_of_docs(...)

Split Documents into subsets that each meet a cumulative length constraint.
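The cumulative-length splitting can be sketched in a few lines; `length_fn` stands in for a token counter, and the exact behavior of split_list_of_docs (e.g. its handling of oversized single documents) is not reproduced here.

```python
# Sketch of splitting documents into subsets whose cumulative length stays
# under token_max, as done before a map-reduce collapse step.
def split_docs(docs, length_fn, token_max):
    subsets, current, size = [], [], 0
    for doc in docs:
        doc_len = length_fn(doc)
        if current and size + doc_len > token_max:
            subsets.append(current)
            current, size = [], 0
        current.append(doc)
        size += doc_len
    if current:
        subsets.append(current)
    return subsets

docs = ["aaaa", "bb", "cccccc", "d"]
print(split_docs(docs, len, 7))  # [['aaaa', 'bb'], ['cccccc', 'd']]
```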

chains.ernie_functions.base.convert_python_function_to_ernie_function(...)

Convert a Python function to an Ernie function-calling API compatible dict.

chains.ernie_functions.base.convert_to_ernie_function(...)

Convert a raw function/class to an Ernie function.

chains.ernie_functions.base.create_ernie_fn_chain(...)

[Legacy] Create an LLM chain that uses Ernie functions.

chains.ernie_functions.base.create_ernie_fn_runnable(...)

Create a runnable sequence that uses Ernie functions.

chains.ernie_functions.base.create_structured_output_chain(...)

[Legacy] Create an LLMChain that uses an Ernie function to get a structured output.

chains.ernie_functions.base.create_structured_output_runnable(...)

Create a runnable that uses an Ernie function to get a structured output.

chains.ernie_functions.base.get_ernie_output_parser(...)

Get the appropriate function output parser given the user functions.

chains.example_generator.generate_example(...)

Return another example given a list of examples for a prompt.

chains.graph_qa.cypher.construct_schema(...)

Filter the schema based on included or excluded types.

chains.graph_qa.cypher.extract_cypher(text)

Extract Cypher code from a text.

chains.graph_qa.falkordb.extract_cypher(text)

Extract Cypher code from a text.

chains.graph_qa.neptune_cypher.extract_cypher(text)

Extract Cypher code from text using Regex.

chains.graph_qa.neptune_cypher.trim_query(query)

Trim the query to only include Cypher keywords.

chains.graph_qa.neptune_cypher.use_simple_prompt(llm)

Decides whether to use the simple prompt.

chains.loading.load_chain(path, **kwargs)

Unified method for loading a chain from LangChainHub or local fs.

chains.loading.load_chain_from_config(...)

Load chain from Config Dict.

chains.openai_functions.base.convert_python_function_to_openai_function(...)

Convert a Python function to an OpenAI function-calling API compatible dict.

chains.openai_functions.base.convert_to_openai_function(...)

Convert a raw function/class to an OpenAI function.

chains.openai_functions.base.create_openai_fn_chain(...)

[Legacy] Create an LLM chain that uses OpenAI functions.

chains.openai_functions.base.create_openai_fn_runnable(...)

Create a runnable sequence that uses OpenAI functions.

chains.openai_functions.base.create_structured_output_chain(...)

[Legacy] Create an LLMChain that uses an OpenAI function to get a structured output.

chains.openai_functions.base.create_structured_output_runnable(...)

Create a runnable that uses an OpenAI function to get a structured output.

chains.openai_functions.base.get_openai_output_parser(...)

Get the appropriate function output parser given the user functions.

chains.openai_functions.citation_fuzzy_match.create_citation_fuzzy_match_chain(llm)

Create a citation fuzzy match chain.

chains.openai_functions.extraction.create_extraction_chain(...)

Creates a chain that extracts information from a passage.

chains.openai_functions.extraction.create_extraction_chain_pydantic(...)

Creates a chain that extracts information from a passage using pydantic schema.

chains.openai_functions.openapi.get_openapi_chain(spec)

Create a chain for querying an API from a OpenAPI spec.

chains.openai_functions.openapi.openapi_spec_to_openai_fn(spec)

Convert a valid OpenAPI spec to the JSON Schema format expected by OpenAI functions.

chains.openai_functions.qa_with_structure.create_qa_with_sources_chain(llm)

Create a question answering chain that returns an answer with sources.

chains.openai_functions.qa_with_structure.create_qa_with_structure_chain(...)

Create a question answering chain that returns an answer with sources based on a schema.

chains.openai_functions.tagging.create_tagging_chain(...)

Creates a chain that extracts information from a passage based on a schema.

chains.openai_functions.tagging.create_tagging_chain_pydantic(...)

Creates a chain that extracts information from a passage based on a pydantic schema.

chains.openai_functions.utils.get_llm_kwargs(...)

Returns the kwargs for the LLMChain constructor.

chains.openai_tools.extraction.create_extraction_chain_pydantic(...)

Creates a chain that extracts information from a passage using a pydantic schema.

chains.prompt_selector.is_chat_model(llm)

Check if the language model is a chat model.

chains.prompt_selector.is_llm(llm)

Check if the language model is an LLM.

chains.qa_with_sources.loading.load_qa_with_sources_chain(llm)

Load a question answering with sources chain.

chains.query_constructor.base.construct_examples(...)

Construct examples from input-output pairs.

chains.query_constructor.base.fix_filter_directive(...)

Fix invalid filter directive.

chains.query_constructor.base.get_query_constructor_prompt(...)

Create query construction prompt.

chains.query_constructor.base.load_query_constructor_chain(...)

Load a query constructor chain.

chains.query_constructor.base.load_query_constructor_runnable(...)

Load a query constructor runnable chain.

chains.query_constructor.parser.get_parser([...])

Returns a parser for the query language.

chains.query_constructor.parser.v_args(...)

Dummy decorator for when lark is not installed.

chains.sql_database.query.create_sql_query_chain(llm, db)

Create a chain that generates SQL queries.

langchain.embeddings¶

This module provides wrappers around embedding models from different APIs and services.

Embedding models can be LLMs or not.

Class hierarchy:

Embeddings --> <name>Embeddings  # Examples: OpenAIEmbeddings, HuggingFaceEmbeddings

Classes¶

embeddings.cache.CacheBackedEmbeddings(...)

Interface for caching results from embedding models.
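The caching idea can be sketched in plain Python: wrap an embedder and memoize results per text, so repeated inputs are only embedded once. `toy_embed` is a stand-in for a real embedding model, and this toy class is not the CacheBackedEmbeddings API.

```python
# Sketch of cache-backed embeddings: memoize embedding results per text.
def toy_embed(text):
    return [float(len(text)), float(text.count(" "))]

class CachedEmbedder:
    def __init__(self, embed_fn):
        self.embed_fn = embed_fn
        self.store = {}   # a real implementation would use a persistent store
        self.calls = 0

    def embed_documents(self, texts):
        vectors = []
        for text in texts:
            if text not in self.store:
                self.calls += 1
                self.store[text] = self.embed_fn(text)
            vectors.append(self.store[text])
        return vectors

embedder = CachedEmbedder(toy_embed)
embedder.embed_documents(["hello world", "hello world", "bye"])
print(embedder.calls)  # 2: "hello world" is embedded only once
```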

langchain.evaluation¶

Evaluation chains for grading LLM and Chain outputs.

This module contains off-the-shelf evaluation chains for grading the output of LangChain primitives such as language models and chains.

Loading an evaluator

To load an evaluator, you can use the load_evaluators or load_evaluator functions with the names of the evaluators to load.

from langchain.evaluation import load_evaluator

evaluator = load_evaluator("qa")
evaluator.evaluate_strings(
    prediction="We sold more than 40,000 units last week",
    input="How many units did we sell last week?",
    reference="We sold 32,378 units",
)

The evaluator must be one of EvaluatorType.

Datasets

To load one of the LangChain HuggingFace datasets, you can use the load_dataset function with the name of the dataset to load.

from langchain.evaluation import load_dataset
ds = load_dataset("llm-math")

Common use cases for evaluation include grading correctness against a reference, comparing the outputs of two models, and assessing an agent's full trajectory of actions.

Low-level API

These evaluators implement one of the following interfaces:

  • StringEvaluator: Evaluate a prediction string against a reference label and/or input context.

  • PairwiseStringEvaluator: Evaluate two prediction strings against each other. Useful for scoring preferences, measuring similarity between two chain or llm agents, or comparing outputs on similar inputs.

  • AgentTrajectoryEvaluator: Evaluate the full sequence of actions taken by an agent.

These interfaces enable easier composability and usage within a higher level evaluation framework.
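The StringEvaluator interface can be sketched in plain Python with the simplest possible implementation, an exact-match grader; this mirrors the shape of the real interface but is not the langchain class.

```python
# Sketch of the StringEvaluator interface: evaluate a prediction against an
# optional reference (and optional input) and return a score dict.
class StringEvaluator:
    def evaluate_strings(self, *, prediction, reference=None, input=None):
        raise NotImplementedError

class ExactMatchEvaluator(StringEvaluator):
    def evaluate_strings(self, *, prediction, reference=None, input=None):
        return {"score": int(prediction.strip() == (reference or "").strip())}

result = ExactMatchEvaluator().evaluate_strings(
    prediction="32,378", reference="32,378"
)
print(result)  # {'score': 1}
```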

Classes¶

evaluation.agents.trajectory_eval_chain.TrajectoryEval

A named tuple containing the score and reasoning for a trajectory.

evaluation.agents.trajectory_eval_chain.TrajectoryEvalChain

A chain for evaluating ReAct style agents.

evaluation.agents.trajectory_eval_chain.TrajectoryOutputParser

Trajectory output parser.

evaluation.comparison.eval_chain.LabeledPairwiseStringEvalChain

A chain for comparing two outputs, such as the outputs of two models, prompts, or outputs of a single model on similar inputs, with labeled preferences.

evaluation.comparison.eval_chain.PairwiseStringEvalChain

A chain for comparing two outputs, such as the outputs of two models, prompts, or outputs of a single model on similar inputs.

evaluation.comparison.eval_chain.PairwiseStringResultOutputParser

A parser for the output of the PairwiseStringEvalChain.

evaluation.criteria.eval_chain.Criteria(value)

A Criteria to evaluate.

evaluation.criteria.eval_chain.CriteriaEvalChain

LLM Chain for evaluating runs against criteria.

evaluation.criteria.eval_chain.CriteriaResultOutputParser

A parser for the output of the CriteriaEvalChain.

evaluation.criteria.eval_chain.LabeledCriteriaEvalChain

Criteria evaluation chain that requires references.

evaluation.embedding_distance.base.EmbeddingDistance(value)

Embedding Distance Metric.

evaluation.embedding_distance.base.EmbeddingDistanceEvalChain

Use embedding distances to score semantic difference between a prediction and reference.

evaluation.embedding_distance.base.PairwiseEmbeddingDistanceEvalChain

Use embedding distances to score semantic difference between two predictions.

evaluation.exact_match.base.ExactMatchStringEvaluator(*)

Compute an exact match between the prediction and the reference.

evaluation.parsing.base.JsonEqualityEvaluator([...])

Evaluates whether the prediction is equal to the reference after parsing both as JSON.

evaluation.parsing.base.JsonValidityEvaluator(...)

Evaluates whether the prediction is valid JSON.

evaluation.parsing.json_distance.JsonEditDistanceEvaluator([...])

An evaluator that calculates the edit distance between JSON strings.

evaluation.parsing.json_schema.JsonSchemaEvaluator(...)

An evaluator that validates a JSON prediction against a JSON schema reference.

evaluation.qa.eval_chain.ContextQAEvalChain

LLM Chain for evaluating QA without ground truth, based on context.

evaluation.qa.eval_chain.CotQAEvalChain

LLM Chain for evaluating QA using chain of thought reasoning.

evaluation.qa.eval_chain.QAEvalChain

LLM Chain for evaluating question answering.

evaluation.qa.generate_chain.QAGenerateChain

LLM Chain for generating examples for question answering.

evaluation.regex_match.base.RegexMatchStringEvaluator(*)

Compute a regex match between the prediction and the reference.

evaluation.schema.AgentTrajectoryEvaluator()

Interface for evaluating agent trajectories.

evaluation.schema.EvaluatorType(value[, ...])

The types of the evaluators.

evaluation.schema.LLMEvalChain

A base class for evaluators that use an LLM.

evaluation.schema.PairwiseStringEvaluator()

Compare the output of two models (or two outputs of the same model).

evaluation.schema.StringEvaluator()

Grade, tag, or otherwise evaluate predictions relative to their inputs and/or reference labels.

evaluation.scoring.eval_chain.LabeledScoreStringEvalChain

A chain for scoring the output of a model on a scale of 1-10.

evaluation.scoring.eval_chain.ScoreStringEvalChain

A chain for scoring the output of a model on a scale of 1-10.

evaluation.scoring.eval_chain.ScoreStringResultOutputParser

A parser for the output of the ScoreStringEvalChain.

evaluation.string_distance.base.PairwiseStringDistanceEvalChain

Compute string edit distances between two predictions.

evaluation.string_distance.base.StringDistance(value)

Distance metric to use.

evaluation.string_distance.base.StringDistanceEvalChain

Compute string distances between the prediction and the reference.

Functions¶

evaluation.comparison.eval_chain.resolve_pairwise_criteria(...)

Resolve the criteria for the pairwise evaluator.

evaluation.criteria.eval_chain.resolve_criteria(...)

Resolve the criteria to evaluate.

evaluation.loading.load_dataset(uri)

Load a dataset from the LangChainDatasets on HuggingFace.

evaluation.loading.load_evaluator(evaluator, *)

Load the requested evaluation chain specified by a string.

evaluation.loading.load_evaluators(evaluators, *)

Load evaluators specified by a list of evaluator types.

evaluation.scoring.eval_chain.resolve_criteria(...)

Resolve the criteria for the pairwise evaluator.

langchain.hub¶

Interface with the LangChain Hub.

Functions¶

hub.pull(owner_repo_commit, *[, api_url, ...])

Pulls an object from the hub and returns it as a LangChain object.

hub.push(repo_full_name, object, *[, ...])

Pushes an object to the hub and returns the URL it can be viewed at in a browser.

langchain.indexes¶

Code to support various indexing workflows.

Provides code to:

  • Create knowledge graphs from data.

  • Support indexing workflows from LangChain data loaders to vectorstores.

For indexing workflows, this code is used to avoid writing duplicated content into the vectorstore and to avoid over-writing content if it's unchanged.

Importantly, this keeps working even if the content being written is derived via a set of transformations from some source content (e.g., indexing child documents that were chunked from parent documents).
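The deduplication idea can be sketched as content hashing: a record manager remembers a hash per document, and writes are skipped when the hash has been seen. This is an illustration of the concept, not the real RecordManager API (which also tracks namespaces and timestamps for cleanup).

```python
import hashlib

# Sketch of index-time deduplication: hash each document's content and skip
# writes whose hash the record manager has already seen.
def index(docs, record_manager, vectorstore):
    written = 0
    for doc in docs:
        key = hashlib.sha256(doc.encode()).hexdigest()
        if key in record_manager:
            continue  # unchanged content: don't re-write it
        record_manager.add(key)
        vectorstore.append(doc)
        written += 1
    return written

record_manager, vectorstore = set(), []
index(["doc a", "doc b"], record_manager, vectorstore)
n = index(["doc a", "doc b", "doc c"], record_manager, vectorstore)
print(n, vectorstore)  # 1 ['doc a', 'doc b', 'doc c']
```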

Classes¶

indexes.base.RecordManager(namespace)

An abstract base class representing the interface for a record manager.

indexes.graph.GraphIndexCreator

Functionality to create a graph index.

indexes.vectorstore.VectorStoreIndexWrapper

Wrapper around a vectorstore for easy access.

indexes.vectorstore.VectorstoreIndexCreator

Logic for creating indexes.

langchain.memory¶

Memory maintains Chain state, incorporating context from past runs.

Class hierarchy for Memory:

BaseMemory --> BaseChatMemory --> <name>Memory  # Examples: ZepMemory, MotorheadMemory

Main helpers:

BaseChatMessageHistory

Chat Message History stores the chat message history in different stores.

Class hierarchy for ChatMessageHistory:

BaseChatMessageHistory --> <name>ChatMessageHistory  # Example: ZepChatMessageHistory

Main helpers:

AIMessage, BaseMessage, HumanMessage
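The windowed-buffer variant of memory can be sketched with a bounded deque: only the last k human/AI exchanges survive. This toy illustrates the idea behind ConversationBufferWindowMemory but is not its API.

```python
from collections import deque

# Sketch of a buffer-window memory: keep only the last k exchanges.
class WindowMemory:
    def __init__(self, k=2):
        self.buffer = deque(maxlen=2 * k)  # k human/AI message pairs

    def save_context(self, human, ai):
        self.buffer.append(("Human", human))
        self.buffer.append(("AI", ai))

    def load_memory(self):
        return "\n".join(f"{role}: {text}" for role, text in self.buffer)

memory = WindowMemory(k=1)
memory.save_context("Hi", "Hello!")
memory.save_context("What's 2+2?", "4")
print(memory.load_memory())  # only the most recent exchange survives
```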

Classes¶

memory.buffer.ConversationBufferMemory

Buffer for storing conversation memory.

memory.buffer.ConversationStringBufferMemory

Buffer for storing conversation memory.

memory.buffer_window.ConversationBufferWindowMemory

Buffer for storing conversation memory inside a limited size window.

memory.chat_memory.BaseChatMemory

Abstract base class for chat memory.

memory.combined.CombinedMemory

Combines the data of multiple memories.

memory.entity.BaseEntityStore

Abstract base class for Entity store.

memory.entity.ConversationEntityMemory

Entity extractor & summarizer memory.

memory.entity.InMemoryEntityStore

In-memory Entity store.

memory.entity.RedisEntityStore

Redis-backed Entity store.

memory.entity.SQLiteEntityStore

SQLite-backed Entity store.

memory.entity.UpstashRedisEntityStore

Upstash Redis backed Entity store.

memory.kg.ConversationKGMemory

Knowledge graph conversation memory.

memory.motorhead_memory.MotorheadMemory

Chat message memory backed by Motorhead service.

memory.readonly.ReadOnlySharedMemory

A memory wrapper that is read-only and cannot be changed.

memory.simple.SimpleMemory

Simple memory for storing context or other information that shouldn't ever change between prompts.

memory.summary.ConversationSummaryMemory

Conversation summarizer to chat memory.

memory.summary.SummarizerMixin

Mixin for summarizer.

memory.summary_buffer.ConversationSummaryBufferMemory

Buffer with summarizer for storing conversation memory.

memory.token_buffer.ConversationTokenBufferMemory

Conversation chat memory with token limit.

memory.vectorstore.VectorStoreRetrieverMemory

VectorStoreRetriever-backed memory.

memory.zep_memory.ZepMemory

Persist your chain history to the Zep MemoryStore.

Functions¶

memory.utils.get_prompt_input_key(inputs, ...)

Get the prompt input key.

langchain.model_laboratory¶

Experiment with different models.

Classes¶

model_laboratory.ModelLaboratory(chains[, names])

Experiment with different models.

langchain.output_parsers¶

OutputParser classes parse the output of an LLM call.

Class hierarchy:

BaseLLMOutputParser --> BaseOutputParser --> <name>OutputParser  # ListOutputParser, PydanticOutputParser

Main helpers:

Serializable, Generation, PromptValue
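
The parser contract boils down to turning raw LLM text into a typed value, plus format instructions to put in the prompt. This toy analogue of BooleanOutputParser illustrates the shape; it is a simplified sketch, not the real class:

```python
class MiniBooleanOutputParser:
    """Toy analogue of BooleanOutputParser: map LLM text to True/False."""

    true_val = "YES"
    false_val = "NO"

    def parse(self, text):
        cleaned = text.strip().upper()
        if self.true_val in cleaned:
            return True
        if self.false_val in cleaned:
            return False
        raise ValueError(f"Expected {self.true_val} or {self.false_val}, got: {text!r}")

    def get_format_instructions(self):
        """Instructions appended to the prompt so the LLM answers parseably."""
        return f"Answer with {self.true_val} or {self.false_val} only."


parser = MiniBooleanOutputParser()
result = parser.parse("yes, that is correct")  # True
```

The fixing and retry parsers below wrap a parser like this one and re-prompt the LLM when `parse` raises.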

Classes¶

output_parsers.boolean.BooleanOutputParser

Parse the output of an LLM call to a boolean.

output_parsers.combining.CombiningOutputParser

Combine multiple output parsers into one.

output_parsers.datetime.DatetimeOutputParser

Parse the output of an LLM call to a datetime.

output_parsers.enum.EnumOutputParser

Parse an output that is one of a set of values.

output_parsers.ernie_functions.JsonKeyOutputFunctionsParser

Parse an output as an element of the JSON object.

output_parsers.ernie_functions.JsonOutputFunctionsParser

Parse an output as a JSON object.

output_parsers.ernie_functions.OutputFunctionsParser

Parse an output that is one of a set of values.

output_parsers.ernie_functions.PydanticAttrOutputFunctionsParser

Parse an output as an attribute of a pydantic object.

output_parsers.ernie_functions.PydanticOutputFunctionsParser

Parse an output as a pydantic object.

output_parsers.fix.OutputFixingParser

Wraps a parser and tries to fix parsing errors.

output_parsers.json.SimpleJsonOutputParser

Parse the output of an LLM call to a JSON object.

output_parsers.openai_functions.JsonKeyOutputFunctionsParser

Parse an output as an element of the JSON object.

output_parsers.openai_functions.JsonOutputFunctionsParser

Parse an output as a JSON object.

output_parsers.openai_functions.OutputFunctionsParser

Parse an output that is one of a set of values.

output_parsers.openai_functions.PydanticAttrOutputFunctionsParser

Parse an output as an attribute of a pydantic object.

output_parsers.openai_functions.PydanticOutputFunctionsParser

Parse an output as a pydantic object.

output_parsers.openai_tools.JsonOutputKeyToolsParser

Parse tools from OpenAI response.

output_parsers.openai_tools.JsonOutputToolsParser

Parse tools from OpenAI response.

output_parsers.openai_tools.PydanticToolsParser

Parse tools from OpenAI response.

output_parsers.pandas_dataframe.PandasDataFrameOutputParser

Parse an output using Pandas DataFrame format.

output_parsers.pydantic.PydanticOutputParser

Parse an output using a pydantic model.

output_parsers.rail_parser.GuardrailsOutputParser

Parse the output of an LLM call using Guardrails.

output_parsers.regex.RegexParser

Parse the output of an LLM call using a regex.

output_parsers.regex_dict.RegexDictParser

Parse the output of an LLM call into a Dictionary using a regex.

output_parsers.retry.RetryOutputParser

Wraps a parser and tries to fix parsing errors.

output_parsers.retry.RetryWithErrorOutputParser

Wraps a parser and tries to fix parsing errors.

output_parsers.structured.ResponseSchema

A schema for a response from a structured output parser.

output_parsers.structured.StructuredOutputParser

Parse the output of an LLM call to a structured output.

output_parsers.xml.XMLOutputParser

Parse an output using XML format.

Functions¶

output_parsers.json.parse_and_check_json_markdown(...)

Parse a JSON string from a Markdown string and check that it contains the expected keys.

output_parsers.json.parse_json_markdown(...)

Parse a JSON string from a Markdown string.

output_parsers.json.parse_partial_json(s, *)

Parse a JSON string that may be missing closing braces.

output_parsers.loading.load_output_parser(config)

Load an output parser.

langchain.prompts¶

Prompt is the input to the model.

Prompt is often constructed from multiple components. Prompt classes and functions make constructing and working with prompts easy.

Class hierarchy:

BasePromptTemplate --> PipelinePromptTemplate
                       StringPromptTemplate --> PromptTemplate
                                                FewShotPromptTemplate
                                                FewShotPromptWithTemplates
                       BaseChatPromptTemplate --> AutoGPTPrompt
                                                  ChatPromptTemplate --> AgentScratchPadChatPromptTemplate



BaseMessagePromptTemplate --> MessagesPlaceholder
                              BaseStringMessagePromptTemplate --> ChatMessagePromptTemplate
                                                                  HumanMessagePromptTemplate
                                                                  AIMessagePromptTemplate
                                                                  SystemMessagePromptTemplate

PromptValue --> StringPromptValue
                ChatPromptValue
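
The essence of a string prompt template, a format string plus declared input variables, can be sketched with a toy class. This is an illustrative stand-in, not the real PromptTemplate implementation:

```python
class MiniPromptTemplate:
    """Toy analogue of PromptTemplate: a format string plus its input variables."""

    def __init__(self, template, input_variables):
        self.template = template
        self.input_variables = input_variables

    def format(self, **kwargs):
        """Fill in the template, failing loudly on missing variables."""
        missing = set(self.input_variables) - set(kwargs)
        if missing:
            raise KeyError(f"Missing prompt variables: {sorted(missing)}")
        return self.template.format(**kwargs)


prompt = MiniPromptTemplate(
    template="Tell me a {adjective} joke about {topic}.",
    input_variables=["adjective", "topic"],
)
text = prompt.format(adjective="short", topic="compilers")
```

Chat prompt templates apply the same fill-in step per message, producing a list of typed messages instead of one string.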

Classes¶

prompts.example_selector.ngram_overlap.NGramOverlapExampleSelector

Select and order examples based on ngram overlap score (sentence_bleu score).

Functions¶

prompts.example_selector.ngram_overlap.ngram_overlap_score(...)

Compute ngram overlap score of source and example as sentence_bleu score.

langchain.retrievers¶

The Retriever class returns Documents given a text query.

It is more general than a vector store. A retriever does not need to be able to store documents, only to return (or retrieve) them. Vector stores can be used as the backbone of a retriever, but there are other types of retrievers as well.

Class hierarchy:

BaseRetriever --> <name>Retriever  # Examples: ArxivRetriever, MergerRetriever

Main helpers:

Document, Serializable, Callbacks,
CallbackManagerForRetrieverRun, AsyncCallbackManagerForRetrieverRun
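
The point that a retriever need not be a vector store can be made concrete with a toy keyword retriever; the class and method names here mirror the retriever interface loosely and are illustrative, not the real BaseRetriever API:

```python
class KeywordRetriever:
    """Toy retriever: returns documents containing the query, no vector store."""

    def __init__(self, documents):
        self.documents = documents

    def get_relevant_documents(self, query):
        """Naive substring match stands in for similarity search."""
        q = query.lower()
        return [doc for doc in self.documents if q in doc.lower()]


retriever = KeywordRetriever([
    "LangChain agents choose tools.",
    "Retrievers return documents for a query.",
    "Vector stores hold embeddings.",
])
docs = retriever.get_relevant_documents("documents")
```

The retrievers listed below swap this matching step for vector similarity, LLM-generated queries, rank fusion, and so on, while keeping the same query-in, documents-out contract.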

Classes¶

retrievers.contextual_compression.ContextualCompressionRetriever

Retriever that wraps a base retriever and compresses the results.

retrievers.document_compressors.base.BaseDocumentCompressor

Base class for document compressors.

retrievers.document_compressors.base.DocumentCompressorPipeline

Document compressor that uses a pipeline of Transformers.

retrievers.document_compressors.chain_extract.LLMChainExtractor

Document compressor that uses an LLM chain to extract the relevant parts of documents.

retrievers.document_compressors.chain_extract.NoOutputParser

Parse outputs that could return a null string of some sort.

retrievers.document_compressors.chain_filter.LLMChainFilter

Filter that drops documents that aren't relevant to the query.

retrievers.document_compressors.cohere_rerank.CohereRerank

Document compressor that uses Cohere Rerank API.

retrievers.document_compressors.embeddings_filter.EmbeddingsFilter

Document compressor that uses embeddings to drop documents unrelated to the query.

retrievers.ensemble.EnsembleRetriever

Retriever that ensembles results from multiple retrievers.

retrievers.merger_retriever.MergerRetriever

Retriever that merges the results of multiple retrievers.

retrievers.multi_query.LineList

List of lines.

retrievers.multi_query.LineListOutputParser

Output parser for a list of lines.

retrievers.multi_query.MultiQueryRetriever

Given a query, use an LLM to write a set of queries.

retrievers.multi_vector.MultiVectorRetriever

Retrieve from a set of multiple embeddings for the same document.

retrievers.multi_vector.SearchType(value[, ...])

Enumerator of the types of search to perform.

retrievers.parent_document_retriever.ParentDocumentRetriever

Retrieve small chunks then retrieve their parent documents.

retrievers.re_phraser.RePhraseQueryRetriever

Given a query, use an LLM to re-phrase it.

retrievers.self_query.base.SelfQueryRetriever

Retriever that uses a vector store and an LLM to generate the vector store queries.

retrievers.self_query.chroma.ChromaTranslator()

Translate Chroma internal query language elements to valid filters.

retrievers.self_query.dashvector.DashvectorTranslator()

Logic for converting internal query language elements to valid filters.

retrievers.self_query.deeplake.DeepLakeTranslator()

Translate DeepLake internal query language elements to valid filters.

retrievers.self_query.elasticsearch.ElasticsearchTranslator()

Translate Elasticsearch internal query language elements to valid filters.

retrievers.self_query.milvus.MilvusTranslator()

Translate Milvus internal query language elements to valid filters.

retrievers.self_query.mongodb_atlas.MongoDBAtlasTranslator()

Translate Mongo internal query language elements to valid filters.

retrievers.self_query.myscale.MyScaleTranslator([...])

Translate MyScale internal query language elements to valid filters.

retrievers.self_query.opensearch.OpenSearchTranslator()

Translate OpenSearch internal query domain-specific language elements to valid filters.

retrievers.self_query.pinecone.PineconeTranslator()

Translate Pinecone internal query language elements to valid filters.

retrievers.self_query.qdrant.QdrantTranslator(...)

Translate Qdrant internal query language elements to valid filters.

retrievers.self_query.redis.RedisTranslator(schema)

Translate Redis internal query language elements to valid filters.

retrievers.self_query.supabase.SupabaseVectorTranslator()

Translate Langchain filters to Supabase PostgREST filters.

retrievers.self_query.timescalevector.TimescaleVectorTranslator()

Translate the internal query language elements to valid filters.

retrievers.self_query.vectara.VectaraTranslator()

Translate Vectara internal query language elements to valid filters.

retrievers.self_query.weaviate.WeaviateTranslator()

Translate Weaviate internal query language elements to valid filters.

retrievers.time_weighted_retriever.TimeWeightedVectorStoreRetriever

Retriever that combines embedding similarity with recency in retrieving values.

retrievers.web_research.LineList

List of questions.

retrievers.web_research.QuestionListOutputParser

Output parser for a list of numbered questions.

retrievers.web_research.SearchQueries

Search queries to research for the user's goal.

retrievers.web_research.WebResearchRetriever

Google Search API retriever.

Functions¶

retrievers.document_compressors.chain_extract.default_get_input(...)

Return the compression chain input.

retrievers.document_compressors.chain_filter.default_get_input(...)

Return the compression chain input.

retrievers.self_query.deeplake.can_cast_to_float(string)

Check if a string can be cast to a float.

retrievers.self_query.milvus.process_value(value)

Convert a value to a string and add double quotes if it is a string.

retrievers.self_query.vectara.process_value(value)

Convert a value to a string and add single quotes if it is a string.

langchain.runnables¶

Classes¶

runnables.hub.HubRunnable

An instance of a runnable stored in the LangChain Hub.

runnables.openai_functions.OpenAIFunction

A function description for ChatOpenAI

runnables.openai_functions.OpenAIFunctionsRouter

A runnable that routes to the selected function.

langchain.smith¶

LangSmith utilities.

This module provides utilities for connecting to LangSmith. For more information on LangSmith, see the LangSmith documentation.

Evaluation

LangSmith helps you evaluate Chains and other language model application components using a number of LangChain evaluators. An example of this is shown below, assuming you’ve created a LangSmith dataset called <my_dataset_name>:

from langsmith import Client
from langchain.chat_models import ChatOpenAI
from langchain.chains import LLMChain
from langchain.smith import RunEvalConfig, run_on_dataset

# Chains may have memory. Passing in a constructor function lets the
# evaluation framework avoid cross-contamination between runs.
def construct_chain():
    llm = ChatOpenAI(temperature=0)
    chain = LLMChain.from_string(
        llm,
        "What's the answer to {your_input_key}"
    )
    return chain

# Load off-the-shelf evaluators via config or the EvaluatorType (string or enum)
evaluation_config = RunEvalConfig(
    evaluators=[
        "qa",  # "Correctness" against a reference answer
        "embedding_distance",
        RunEvalConfig.Criteria("helpfulness"),
        RunEvalConfig.Criteria({
            "fifth-grader-score": "Do you have to be smarter than a fifth grader to answer this question?"
        }),
    ]
)

client = Client()
run_on_dataset(
    client,
    "<my_dataset_name>",
    construct_chain,
    evaluation=evaluation_config,
)

You can also create custom evaluators by subclassing the StringEvaluator or LangSmith’s RunEvaluator classes.

from typing import Optional
from langchain.evaluation import StringEvaluator

class MyStringEvaluator(StringEvaluator):

    @property
    def requires_input(self) -> bool:
        return False

    @property
    def requires_reference(self) -> bool:
        return True

    @property
    def evaluation_name(self) -> str:
        return "exact_match"

    def _evaluate_strings(self, prediction, reference=None, input=None, **kwargs) -> dict:
        return {"score": prediction == reference}


evaluation_config = RunEvalConfig(
    custom_evaluators=[MyStringEvaluator()],
)

run_on_dataset(
    client,
    "<my_dataset_name>",
    construct_chain,
    evaluation=evaluation_config,
)

Primary Functions

  • arun_on_dataset: Asynchronous function to evaluate a chain, agent, or other LangChain component over a dataset.

  • run_on_dataset: Function to evaluate a chain, agent, or other LangChain component over a dataset.

  • RunEvalConfig: Class representing the configuration for running evaluation. You can select evaluators by EvaluatorType or config, or you can pass in custom_evaluators.

Classes¶

smith.evaluation.config.EvalConfig

Configuration for a given run evaluator.

smith.evaluation.config.RunEvalConfig

Configuration for a run evaluation.

smith.evaluation.config.SingleKeyEvalConfig

Create a new model by parsing and validating input data from keyword arguments.

smith.evaluation.progress.ProgressBarCallback(total)

A simple progress bar for the console.

smith.evaluation.runner_utils.EvalError(...)

Your architecture raised an error.

smith.evaluation.runner_utils.InputFormatError

Raised when the input format is invalid.

smith.evaluation.runner_utils.TestResult

A dictionary of the results of a single test run.

smith.evaluation.string_run_evaluator.ChainStringRunMapper

Extract items to evaluate from the run object from a chain.

smith.evaluation.string_run_evaluator.LLMStringRunMapper

Extract items to evaluate from the run object.

smith.evaluation.string_run_evaluator.StringExampleMapper

Map an example, or row in the dataset, to the inputs of an evaluation.

smith.evaluation.string_run_evaluator.StringRunEvaluatorChain

Evaluate Run and optional examples.

smith.evaluation.string_run_evaluator.StringRunMapper

Extract items to evaluate from the run object.

smith.evaluation.string_run_evaluator.ToolStringRunMapper

Map an input to the tool.

Functions¶

smith.evaluation.name_generation.random_name()

Generate a random name.

smith.evaluation.runner_utils.arun_on_dataset(...)

Run the Chain or language model on a dataset and store traces to the specified project name.

smith.evaluation.runner_utils.run_on_dataset(...)

Run the Chain or language model on a dataset and store traces to the specified project name.

langchain.storage¶

Implementations of key-value stores and storage helpers.

Module provides implementations of various key-value stores that conform to a simple key-value interface.

The primary goal of these storages is to support implementation of caching.
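
The simple key-value interface can be sketched with an in-memory dictionary store; the batch method names below resemble the store interface but the class itself is a toy stand-in:

```python
class DictStore:
    """Toy in-memory key-value store with a batch (mget/mset/mdelete) interface."""

    def __init__(self):
        self._data = {}

    def mset(self, pairs):
        """Set multiple (key, value) pairs at once."""
        for key, value in pairs:
            self._data[key] = value

    def mget(self, keys):
        """Missing keys come back as None, so callers can detect cache misses."""
        return [self._data.get(key) for key in keys]

    def mdelete(self, keys):
        for key in keys:
            self._data.pop(key, None)


store = DictStore()
store.mset([("a", 1), ("b", 2)])
values = store.mget(["a", "b", "missing"])  # [1, 2, None]
```

A caching layer only needs these batch operations: check `mget` for hits, compute the misses, then `mset` the results.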

Classes¶

storage.encoder_backed.EncoderBackedStore(...)

Wraps a store with key and value encoders/decoders.

storage.file_system.LocalFileStore(root_path)

BaseStore interface that works on the local file system.

storage.in_memory.InMemoryBaseStore()

In-memory implementation of the BaseStore using a dictionary.

langchain.text_splitter¶

Text Splitters are classes for splitting text.

Class hierarchy:

BaseDocumentTransformer --> TextSplitter --> <name>TextSplitter  # Example: CharacterTextSplitter
                                             RecursiveCharacterTextSplitter -->  <name>TextSplitter

Note: MarkdownHeaderTextSplitter and HTMLHeaderTextSplitter do not derive from TextSplitter.

Main helpers:

Document, Tokenizer, Language, LineType, HeaderType
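
The two parameters every splitter shares, chunk size and chunk overlap, can be illustrated with a toy fixed-window splitter; real splitters prefer to break on separators, so this is a deliberately simplified sketch:

```python
def split_text(text, chunk_size, chunk_overlap):
    """Toy fixed-size splitter: slide a window of chunk_size with overlap."""
    if chunk_overlap >= chunk_size:
        raise ValueError("chunk_overlap must be smaller than chunk_size")
    step = chunk_size - chunk_overlap  # how far the window advances each chunk
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]


chunks = split_text("abcdefghij", chunk_size=4, chunk_overlap=2)
# chunks == ["abcd", "cdef", "efgh", "ghij", "ij"]
```

The overlap means the tail of each chunk is repeated at the head of the next, so context that straddles a boundary is not lost.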

Classes¶

text_splitter.CharacterTextSplitter([...])

Splitting text by looking at characters.

text_splitter.ElementType

Element type as typed dict.

text_splitter.HTMLHeaderTextSplitter(...[, ...])

Splitting HTML files based on specified headers.

text_splitter.HeaderType

Header type as typed dict.

text_splitter.Language(value[, names, ...])

Enum of the programming languages.

text_splitter.LatexTextSplitter(**kwargs)

Attempts to split the text along Latex-formatted layout elements.

text_splitter.LineType

Line type as typed dict.

text_splitter.MarkdownHeaderTextSplitter(...)

Splitting markdown files based on specified headers.

text_splitter.MarkdownTextSplitter(**kwargs)

Attempts to split the text along Markdown-formatted headings.

text_splitter.NLTKTextSplitter([separator, ...])

Splitting text using NLTK package.

text_splitter.PythonCodeTextSplitter(**kwargs)

Attempts to split the text along Python syntax.

text_splitter.RecursiveCharacterTextSplitter([...])

Splitting text by recursively looking at characters.

text_splitter.SentenceTransformersTokenTextSplitter([...])

Splitting text to tokens using sentence model tokenizer.

text_splitter.SpacyTextSplitter([separator, ...])

Splitting text using Spacy package.

text_splitter.TextSplitter(chunk_size, ...)

Interface for splitting text into chunks.

text_splitter.TokenTextSplitter([...])

Splitting text to tokens using model tokenizer.

text_splitter.Tokenizer(chunk_overlap, ...)

Tokenizer data class.

Functions¶

text_splitter.split_text_on_tokens(*, text, ...)

Split incoming text and return chunks using tokenizer.

langchain.tools¶

Tools are classes that an Agent uses to interact with the world.

Each tool has a description. The agent uses the description to choose the right tool for the job.

Class hierarchy:

ToolMetaclass --> BaseTool --> <name>Tool  # Examples: AIPluginTool, BaseGraphQLTool
                               <name>      # Examples: BraveSearch, HumanInputRun

Main helpers:

CallbackManagerForToolRun, AsyncCallbackManagerForToolRun
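
How a description drives tool choice can be sketched with toy classes; an LLM does the selection in practice, and the keyword matching below is a crude illustrative stand-in, not the agent's actual mechanism:

```python
class MiniTool:
    """Toy analogue of a Tool: a name, a description, and a callable."""

    def __init__(self, name, description, func):
        self.name = name
        self.description = description
        self.func = func

    def run(self, tool_input):
        return self.func(tool_input)


def pick_tool(tools, task_keywords):
    """Crude stand-in for an agent's choice: match keywords to descriptions."""
    for tool in tools:
        if any(word in tool.description.lower() for word in task_keywords):
            return tool
    return None


tools = [
    # eval is used here only to keep the toy example short.
    MiniTool("calculator", "Useful for arithmetic questions.", lambda s: str(eval(s))),
    MiniTool("echo", "Repeats the input back.", lambda s: s),
]
chosen = pick_tool(tools, ["arithmetic"])
answer = chosen.run("2 + 3")  # "5"
```

This is why tool descriptions matter: they are the only signal the reasoning engine has for routing a task to the right tool.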

Classes¶

tools.retriever.RetrieverInput

Create a new model by parsing and validating input data from keyword arguments.

Functions¶

tools.render.render_text_description(tools)

Render the tool name and description in plain text.

tools.render.render_text_description_and_args(tools)

Render the tool name, description, and args in plain text.

tools.retriever.create_retriever_tool(...)

Create a tool to do retrieval of documents.

langchain.utils¶

Utility functions for LangChain.

These functions do not depend on any other LangChain module.

Classes¶

utils.ernie_functions.FunctionDescription

Representation of a callable function to the Ernie API.

utils.ernie_functions.ToolDescription

Representation of a callable function to the Ernie API.

Functions¶

utils.ernie_functions.convert_pydantic_to_ernie_function(...)

Converts a Pydantic model to a function description for the Ernie API.

utils.ernie_functions.convert_pydantic_to_ernie_tool(...)

Converts a Pydantic model to a function description for the Ernie API.
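
The shape of such a conversion can be sketched without Pydantic, using a stdlib dataclass: field annotations become JSON-schema property types and the docstring becomes the description. The `GetWeather` model, the `TYPE_MAP`, and the helper name are all hypothetical, and the real converters handle far more cases:

```python
from dataclasses import dataclass, fields

# Hypothetical mapping from Python annotations to JSON-schema type names.
TYPE_MAP = {int: "integer", str: "string", float: "number", bool: "boolean"}

@dataclass
class GetWeather:
    """Get the current weather for a city."""
    city: str
    days: int

def to_function_description(model):
    """Build a function-description dict from a dataclass's fields and docstring."""
    properties = {f.name: {"type": TYPE_MAP[f.type]} for f in fields(model)}
    return {
        "name": model.__name__,
        "description": (model.__doc__ or "").strip(),
        "parameters": {
            "type": "object",
            "properties": properties,
            "required": [f.name for f in fields(model)],
        },
    }

desc = to_function_description(GetWeather)
```

The resulting dict is the kind of structured function description that function-calling APIs consume.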