langchain.agents.structured_chat.base.create_structured_chat_agent
- langchain.agents.structured_chat.base.create_structured_chat_agent(llm: BaseLanguageModel, tools: Sequence[BaseTool], prompt: ChatPromptTemplate, tools_renderer: Callable[[List[BaseTool]], str] = render_text_description_and_args) → Runnable
Create an agent aimed at supporting tools with multiple inputs.
- Parameters
llm (BaseLanguageModel) – LLM to use as the agent.
tools (Sequence[BaseTool]) – Tools this agent has access to.
prompt (ChatPromptTemplate) – The prompt to use. See Prompt section below for more.
tools_renderer (Callable[[List[BaseTool]], str]) – Controls how the tools are converted into a string before being passed to the LLM. Default is render_text_description_and_args (a custom renderer is sketched below this parameter list).
- Returns
A Runnable sequence representing an agent. It takes as input all the same input variables as the prompt passed in does. It returns as output either an AgentAction or AgentFinish.
- Return type
Runnable
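The tools_renderer argument accepts any callable that turns the tool list into a single string for the prompt's {tools} slot. The sketch below is a minimal, hypothetical alternative to the default render_text_description_and_args; it relies only on the standard name and description attributes of BaseTool (the function name bullet_list_renderer is illustrative, not part of LangChain):

```python
from typing import List

from langchain_core.tools import BaseTool


def bullet_list_renderer(tools: List[BaseTool]) -> str:
    # One bullet per tool: name plus description. Unlike the default renderer,
    # this deliberately omits each tool's argument schema.
    return "\n".join(f"- {tool.name}: {tool.description}" for tool in tools)


# Passed via the keyword argument, e.g.:
# agent = create_structured_chat_agent(model, tools, prompt, tools_renderer=bullet_list_renderer)
```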
Examples
```python
from langchain import hub
from langchain_community.chat_models import ChatOpenAI
from langchain.agents import AgentExecutor, create_structured_chat_agent

prompt = hub.pull("hwchase17/structured-chat-agent")
model = ChatOpenAI()
tools = ...

agent = create_structured_chat_agent(model, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools)

agent_executor.invoke({"input": "hi"})

# Using with chat history
from langchain_core.messages import AIMessage, HumanMessage

agent_executor.invoke(
    {
        "input": "what's my name?",
        "chat_history": [
            HumanMessage(content="hi! my name is bob"),
            AIMessage(content="Hello Bob! How can I assist you today?"),
        ],
    }
)
```
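The tools = ... placeholder above is deliberately left unspecified. Purely as an illustration, a single tool could be defined with the @tool decorator from langchain_core.tools (the word_count tool below is hypothetical, not something this agent requires):

```python
from langchain_core.tools import tool


@tool
def word_count(text: str) -> int:
    """Count the number of words in a piece of text."""
    return len(text.split())


tools = [word_count]
```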
Prompt:
- The prompt must have input keys:
tools: contains descriptions and arguments for each tool.
tool_names: contains all tool names.
agent_scratchpad: contains previous agent actions and tool outputs as a string.
Here’s an example:
````python
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

system = '''Respond to the human as helpfully and accurately as possible. You have access to the following tools:

{tools}

Use a json blob to specify a tool by providing an action key (tool name) and an action_input key (tool input).

Valid "action" values: "Final Answer" or {tool_names}

Provide only ONE action per $JSON_BLOB, as shown:

```
{{
  "action": $TOOL_NAME,
  "action_input": $INPUT
}}
```

Follow this format:

Question: input question to answer
Thought: consider previous and subsequent steps
Action:
```
$JSON_BLOB
```
Observation: action result
... (repeat Thought/Action/Observation N times)
Thought: I know what to respond
Action:
```
{{
  "action": "Final Answer",
  "action_input": "Final response to human"
}}

Begin! Reminder to ALWAYS respond with a valid json blob of a single action. Use tools if necessary. Respond directly if appropriate. Format is Action:```$JSON_BLOB```then Observation'''

human = '''{input}

{agent_scratchpad}

(reminder to respond in a JSON blob no matter what)'''

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system),
        MessagesPlaceholder("chat_history", optional=True),
        ("human", human),
    ]
)
````
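The {tools} and {tool_names} slots in the system message are filled in from the tools argument when the agent is constructed, so only the input-like variables, the optional chat_history, and agent_scratchpad are supplied at invocation time. As a small, optional sanity check (a sketch, assuming the prompt built above), you can confirm a custom prompt declares the required keys before handing it to create_structured_chat_agent:

```python
# Confirm the prompt declares the input keys create_structured_chat_agent expects.
required = {"tools", "tool_names", "agent_scratchpad"}
missing = required - set(prompt.input_variables)
if missing:
    raise ValueError(f"Prompt is missing required input keys: {missing}")
```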