langchain.agents.json_chat.base.create_json_chat_agent

langchain.agents.json_chat.base.create_json_chat_agent(llm: BaseLanguageModel, tools: Sequence[BaseTool], prompt: ChatPromptTemplate) → Runnable

Create an agent that uses JSON to format its logic, built for Chat Models.

Examples

from langchain import hub
from langchain_community.chat_models import ChatOpenAI
from langchain.agents import AgentExecutor, create_json_chat_agent

prompt = hub.pull("hwchase17/react-chat-json")
model = ChatOpenAI()
tools = ...

agent = create_json_chat_agent(model, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools)

agent_executor.invoke({"input": "hi"})

# Using with chat history
from langchain_core.messages import AIMessage, HumanMessage
agent_executor.invoke(
    {
        "input": "what's my name?",
        "chat_history": [
            HumanMessage(content="hi! my name is bob"),
            AIMessage(content="Hello Bob! How can I assist you today?"),
        ],
    }
)
Parameters
  • llm – LLM to use as the agent.

  • tools – Tools this agent has access to.

  • prompt – The prompt to use. Must include the input variables tools, tool_names, and agent_scratchpad.

Returns

A Runnable sequence representing an agent. It accepts the same input variables as the prompt passed in, and returns either an AgentAction or an AgentFinish as output.
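As an illustration of the format this agent relies on: the prompt instructs the chat model to respond with a JSON blob containing an "action" (a tool name, or "Final Answer" to finish) and an "action_input". The sketch below uses only the standard library to show that shape; the actual parsing into AgentAction/AgentFinish is handled internally by LangChain, and the tool name and inputs here are hypothetical.

```python
import json

# Hypothetical model response selecting a tool. The JSON-chat prompt asks the
# model to emit a blob with "action" (tool name) and "action_input" (tool args).
tool_call = json.loads('{"action": "search", "action_input": "weather in SF"}')
assert tool_call["action"] == "search"          # parsed into an AgentAction

# When the agent is done, it uses the reserved "Final Answer" action, which
# LangChain parses into an AgentFinish instead of another tool call.
final = json.loads(
    '{"action": "Final Answer", "action_input": "Hello Bob! How can I help?"}'
)
assert final["action"] == "Final Answer"        # parsed into an AgentFinish
```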