langchain.agents.openai_tools.base.create_openai_tools_agent
- langchain.agents.openai_tools.base.create_openai_tools_agent(llm: BaseLanguageModel, tools: Sequence[BaseTool], prompt: ChatPromptTemplate) → Runnable [source]
Create an agent that uses OpenAI tools.
- Parameters
llm (BaseLanguageModel) – LLM to use as the agent.
tools (Sequence[BaseTool]) – Tools this agent has access to.
prompt (ChatPromptTemplate) – The prompt to use. See Prompt section below for more on the expected input variables.
- Returns
A Runnable sequence representing an agent. It takes the same input variables as the prompt passed in and returns as output either an AgentAction or AgentFinish.
- Return type
Runnable
Example
from langchain import hub
from langchain_community.chat_models import ChatOpenAI
from langchain.agents import AgentExecutor, create_openai_tools_agent

prompt = hub.pull("hwchase17/openai-tools-agent")
model = ChatOpenAI()
tools = ...

agent = create_openai_tools_agent(model, tools, prompt)

agent_executor = AgentExecutor(agent=agent, tools=tools)

agent_executor.invoke({"input": "hi"})

# Using with chat history
from langchain_core.messages import AIMessage, HumanMessage

agent_executor.invoke(
    {
        "input": "what's my name?",
        "chat_history": [
            HumanMessage(content="hi! my name is bob"),
            AIMessage(content="Hello Bob! How can I assist you today?"),
        ],
    }
)
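The tools = ... line above is left unspecified. As a minimal sketch only (the get_word_length tool is a hypothetical placeholder, not part of the original example), a tool list can be built with the tool decorator from langchain_core.tools:

from langchain_core.tools import tool

# Hypothetical tool used only to make the example above runnable;
# any sequence of BaseTool instances works here.
@tool
def get_word_length(word: str) -> int:
    """Return the number of characters in a word."""
    return len(word)

tools = [get_word_length]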
Prompt:
- The agent prompt must have an agent_scratchpad key that is a MessagesPlaceholder. Intermediate agent actions and tool output messages will be passed in here.
Here’s an example:
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant"),
        MessagesPlaceholder("chat_history", optional=True),
        ("human", "{input}"),
        MessagesPlaceholder("agent_scratchpad"),
    ]
)
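Because intermediate steps are injected into agent_scratchpad, the agent runnable also accepts an intermediate_steps key (a list of (AgentAction, tool output) tuples) alongside the prompt's own variables when it is invoked without an AgentExecutor. A minimal sketch, assuming the model, tools, and prompt from the Example above and valid OpenAI credentials:

# Invoke the agent runnable directly; with no prior steps, pass an empty list.
# The result is either an AgentFinish or a list of tool-calling AgentActions.
agent = create_openai_tools_agent(model, tools, prompt)
result = agent.invoke({"input": "hi", "intermediate_steps": []})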