langchain.agents.mrkl.base.ZeroShotAgent

class langchain.agents.mrkl.base.ZeroShotAgent[source]

Bases: Agent

Agent for the MRKL chain.

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError if the input data cannot be parsed to form a valid model.

param allowed_tools: Optional[List[str]] = None
param llm_chain: langchain.chains.llm.LLMChain [Required]
param output_parser: langchain.agents.agent.AgentOutputParser [Optional]
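
A minimal construction sketch (the Echo tool, its function, and the OpenAI model choice are illustrative assumptions, not part of this API):

from langchain.agents import AgentExecutor, Tool, ZeroShotAgent
from langchain.chains import LLMChain
from langchain.llms import OpenAI

# A single illustrative tool; any Sequence[BaseTool] works here.
tools = [
    Tool(
        name="Echo",
        func=lambda q: q,
        description="Returns the input unchanged. Placeholder tool for illustration.",
    )
]

# Build the agent's prompt and LLMChain, then wrap the agent in an executor.
prompt = ZeroShotAgent.create_prompt(tools)
llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)
agent = ZeroShotAgent(llm_chain=llm_chain, allowed_tools=[t.name for t in tools])
executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)
# executor.run("What is 2 + 2?")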
async aplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) Union[AgentAction, AgentFinish]

Given input, decide what to do.

Parameters
  • intermediate_steps – Steps the LLM has taken to date, along with observations

  • callbacks – Callbacks to run.

  • **kwargs – User inputs.

Returns

Action specifying what tool to use.
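
A sketch of calling aplan directly (normally AgentExecutor drives this loop for you); the "input" key matches the default prompt above, and the question is an assumption:

import asyncio

async def first_step(agent):
    # No steps taken yet, so intermediate_steps is empty; "input" fills the prompt.
    return await agent.aplan([], input="What is 2 + 2?")

# decision = asyncio.run(first_step(agent))
# decision is an AgentAction (use a tool) or an AgentFinish (final answer).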

classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) Model

Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values.

copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) Model

Duplicate a model, optionally choose which fields to include, exclude and change.

Parameters
  • include – fields to include in new model

  • exclude – fields to exclude from the new model; as with values, this takes precedence over include

  • update – values to change/add in the new model. Note: the data is not validated before creating the new model; you should trust this data

  • deep – set to True to make a deep copy of the model

Returns

new model instance

classmethod create_prompt(tools: Sequence[BaseTool], prefix: str = 'Answer the following questions as best you can. You have access to the following tools:', suffix: str = 'Begin!\n\nQuestion: {input}\nThought:{agent_scratchpad}', format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None) PromptTemplate[source]

Create a prompt in the style of the zero-shot agent.

Parameters
  • tools – List of tools the agent will have access to, used to format the prompt.

  • prefix – String to put before the list of tools.

  • suffix – String to put after the list of tools.

  • input_variables – List of input variables the final prompt will expect.

Returns

A PromptTemplate with the template assembled from the pieces here.
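
A sketch of assembling a custom prompt; the Search tool, its stub function, and the prefix text are assumptions used only for illustration:

from langchain.agents import Tool, ZeroShotAgent

tools = [
    Tool(
        name="Search",
        func=lambda q: "no results",  # stub standing in for a real search backend
        description="Look up information on the web.",
    )
]

prompt = ZeroShotAgent.create_prompt(
    tools,
    prefix="Answer the question using the tools below:",
    input_variables=["input", "agent_scratchpad"],
)
print(prompt.input_variables)  # ['input', 'agent_scratchpad']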

dict(**kwargs: Any) Dict

Return dictionary representation of agent.

classmethod from_llm_and_tools(llm: BaseLanguageModel, tools: Sequence[BaseTool], callback_manager: Optional[BaseCallbackManager] = None, output_parser: Optional[AgentOutputParser] = None, prefix: str = 'Answer the following questions as best you can. You have access to the following tools:', suffix: str = 'Begin!\n\nQuestion: {input}\nThought:{agent_scratchpad}', format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, **kwargs: Any) Agent[source]

Construct an agent from an LLM and tools.
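
A compact alternative to building the prompt and LLMChain by hand; the tool and the OpenAI model choice are assumptions:

from langchain.agents import AgentExecutor, Tool, ZeroShotAgent
from langchain.llms import OpenAI

tools = [Tool(name="Echo", func=lambda q: q, description="Returns the input unchanged.")]

agent = ZeroShotAgent.from_llm_and_tools(llm=OpenAI(temperature=0), tools=tools)
executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools)
# executor.run("Repeat the word hello.")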

classmethod from_orm(obj: Any) Model
get_allowed_tools() Optional[List[str]]
get_full_inputs(intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) Dict[str, Any]

Create the full inputs for the LLMChain from intermediate steps.
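
A sketch showing how prior steps are folded into the agent_scratchpad input; the AgentAction contents are illustrative, and agent is assumed to be an already-constructed ZeroShotAgent:

from langchain.schema import AgentAction

steps = [
    (
        AgentAction(
            tool="Echo",
            tool_input="hello",
            log="Thought: I should echo the word.\nAction: Echo\nAction Input: hello",
        ),
        "hello",  # observation returned by the tool
    )
]

full_inputs = agent.get_full_inputs(steps, input="Repeat the word hello.")
# full_inputs contains "input" plus an "agent_scratchpad" string built from
# the prior Thought/Action/Observation text.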

json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) unicode

Generate a JSON representation of the model, include and exclude arguments as per dict().

encoder is an optional function to supply as default to json.dumps(); other arguments are passed through to json.dumps().

classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) Model
classmethod parse_obj(obj: Any) Model
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) Model
plan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) Union[AgentAction, AgentFinish]

Given input, decide what to do.

Parameters
  • intermediate_steps – Steps the LLM has taken to date, along with observations

  • callbacks – Callbacks to run.

  • **kwargs – User inputs.

Returns

Action specifying what tool to use.
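
A synchronous counterpart to the aplan sketch above; agent is assumed to be an already-constructed ZeroShotAgent and the question is illustrative:

from langchain.schema import AgentFinish

result = agent.plan([], input="What is 2 + 2?")

if isinstance(result, AgentFinish):
    print(result.return_values["output"])   # final answer
else:
    print(result.tool, result.tool_input)   # next tool call to execute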

return_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) AgentFinish

Return response when agent has been stopped due to max iterations.
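
A sketch of the "force" strategy, which returns a canned AgentFinish instead of consulting the LLM again; steps and the input are assumed from the earlier sketches:

finish = agent.return_stopped_response(
    early_stopping_method="force",
    intermediate_steps=steps,
    input="Repeat the word hello.",
)
print(finish.return_values["output"])

The other supported value, "generate", instead asks the LLM for one final answer based on the steps gathered so far.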

save(file_path: Union[Path, str]) None

Save the agent.

Parameters

file_path – Path to file to save the agent to.

Example:

# If working with an agent executor:
agent.agent.save(file_path="path/agent.yaml")

classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') DictStrAny
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) unicode
tool_run_logging_kwargs() Dict
classmethod update_forward_refs(**localns: Any) None

Try to update ForwardRefs on fields based on this Model, globalns and localns.

classmethod validate(value: Any) Model
property llm_prefix: str

Prefix to prepend to the LLM call.

property observation_prefix: str

Prefix to prepend to the observation.

property return_values: List[str]

Return values of the agent.
