langchain.agents.xml.base.create_xml_agent
- langchain.agents.xml.base.create_xml_agent(llm: BaseLanguageModel, tools: Sequence[BaseTool], prompt: BasePromptTemplate, tools_renderer: Callable[[List[BaseTool]], str] = render_text_description) → Runnable
Create an agent that uses XML to format its logic.
- Parameters
llm (BaseLanguageModel) – LLM to use as the agent.
tools (Sequence[BaseTool]) – Tools this agent has access to.
prompt (BasePromptTemplate) – The prompt to use. Must have the input keys tools (descriptions for each tool) and agent_scratchpad (previous agent actions and tool outputs).
tools_renderer (Callable[[List[BaseTool]], str]) – Controls how the tools are converted into a string before being passed into the LLM. Default is render_text_description (see the custom-renderer sketch after the example below).
- Returns
A Runnable sequence representing an agent. It takes the same input variables as the prompt passed in, and returns either an AgentAction or an AgentFinish as output.
- Return type
Runnable
Example
from langchain import hub
from langchain_community.chat_models import ChatAnthropic
from langchain.agents import AgentExecutor, create_xml_agent

prompt = hub.pull("hwchase17/xml-agent-convo")
model = ChatAnthropic()
tools = ...

agent = create_xml_agent(model, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools)

agent_executor.invoke({"input": "hi"})

# Use with chat history
from langchain_core.messages import AIMessage, HumanMessage
agent_executor.invoke(
    {
        "input": "what's my name?",
        # Notice that chat_history is a string
        # since this prompt is aimed at LLMs, not chat models
        "chat_history": "Human: My name is Bob\nAI: Hello Bob!",
    }
)
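If the default tool rendering does not suit your prompt, you can pass a custom tools_renderer. The following is a minimal sketch, not part of the API: custom_tools_renderer is a hypothetical name, and it assumes model, tools, and prompt are defined as in the example above. It reproduces the "name: description" layout of the default render_text_description.

from typing import List
from langchain_core.tools import BaseTool

def custom_tools_renderer(tools: List[BaseTool]) -> str:
    # Render each tool as "name: description", one per line,
    # mirroring the behaviour of the default render_text_description.
    return "\n".join(f"{t.name}: {t.description}" for t in tools)

agent = create_xml_agent(model, tools, prompt, tools_renderer=custom_tools_renderer)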
Prompt:
- The prompt must have input keys:
tools: contains descriptions for each tool.
agent_scratchpad: contains previous agent actions and tool outputs as an XML string.
Here’s an example:
from langchain_core.prompts import PromptTemplate

template = '''You are a helpful assistant. Help the user answer any questions.

You have access to the following tools:

{tools}

In order to use a tool, you can use <tool></tool> and <tool_input></tool_input> tags. You will then get back a response in the form <observation></observation>
For example, if you have a tool called 'search' that could run a google search, in order to search for the weather in SF you would respond:

<tool>search</tool><tool_input>weather in SF</tool_input>
<observation>64 degrees</observation>

When you are done, respond with a final answer between <final_answer></final_answer>. For example:

<final_answer>The weather in SF is 64 degrees</final_answer>

Begin!

Previous Conversation:
{chat_history}

Question: {input}
{agent_scratchpad}'''

prompt = PromptTemplate.from_template(template)
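The examples above leave the tool list itself undefined (tools = ...). As a purely hypothetical illustration, not part of this API, a tool list matching the 'search' tool described in the prompt could be built with the tool decorator from langchain_core.tools:

from langchain_core.tools import tool

@tool
def search(query: str) -> str:
    """Run a web search and return a short text result."""
    # Hypothetical stub: a real implementation would call an actual search backend.
    return "64 degrees"

tools = [search]
agent = create_xml_agent(model, tools, prompt)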