langchain_community.callbacks.aim_callback.AimCallbackHandler¶

class langchain_community.callbacks.aim_callback.AimCallbackHandler(repo: Optional[str] = None, experiment_name: Optional[str] = None, system_tracking_interval: Optional[int] = 10, log_system_params: bool = True)[source]¶

Callback Handler that logs to Aim.

Parameters
  • repo (str, optional) – Aim repository path or Repo object to which the Run object is bound. If skipped, the default Repo is used.

  • experiment_name (str, optional) – Sets Run’s experiment property. ‘default’ if not specified. Can be used later to query runs/sequences.

  • system_tracking_interval (int, optional) – Sets the tracking interval in seconds for system usage metrics (CPU, Memory, etc.). Set to None to disable system metrics tracking.

  • log_system_params (bool, optional) – Enable/Disable logging of system params such as installed packages, git info, environment variables, etc.

When one of its callback methods is invoked, this handler formats the callback's input together with metadata about the state of the LLM run and logs the result to Aim.

Initialize callback handler.
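
A minimal construction sketch follows, mirroring the parameters above; the repository path and experiment name are illustrative, and the aim package is assumed to be installed.

    # Minimal sketch, assuming `pip install aim` and write access to the repo path.
    from langchain_community.callbacks.aim_callback import AimCallbackHandler

    aim_callback = AimCallbackHandler(
        repo=".",                        # Aim repository path; default Repo if omitted
        experiment_name="scenario 1: OpenAI LLM",
        system_tracking_interval=10,     # seconds between system-metric samples; None disables
        log_system_params=True,          # log installed packages, git info, env vars, etc.
    )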

Attributes

always_verbose

Whether to call verbose callbacks even if verbose is False.

ignore_agent

Whether to ignore agent callbacks.

ignore_chain

Whether to ignore chain callbacks.

ignore_chat_model

Whether to ignore chat model callbacks.

ignore_llm

Whether to ignore LLM callbacks.

ignore_retriever

Whether to ignore retriever callbacks.

ignore_retry

Whether to ignore retry callbacks.

raise_error

run_inline

Methods

__init__([repo, experiment_name, ...])

Initialize callback handler.

flush_tracker([repo, experiment_name, ...])

Flush the tracker and reset the session.

get_custom_callback_meta()

on_agent_action(action, **kwargs)

Run on agent action.

on_agent_finish(finish, **kwargs)

Run when agent ends running.

on_chain_end(outputs, **kwargs)

Run when chain ends running.

on_chain_error(error, **kwargs)

Run when chain errors.

on_chain_start(serialized, inputs, **kwargs)

Run when chain starts running.

on_chat_model_start(serialized, messages, *, ...)

Run when a chat model starts running.

on_llm_end(response, **kwargs)

Run when LLM ends running.

on_llm_error(error, **kwargs)

Run when LLM errors.

on_llm_new_token(token, **kwargs)

Run when LLM generates a new token.

on_llm_start(serialized, prompts, **kwargs)

Run when LLM starts.

on_retriever_end(documents, *, run_id[, ...])

Run when Retriever ends running.

on_retriever_error(error, *, run_id[, ...])

Run when Retriever errors.

on_retriever_start(serialized, query, *, run_id)

Run when Retriever starts running.

on_retry(retry_state, *, run_id[, parent_run_id])

Run on a retry event.

on_text(text, **kwargs)

Run when agent is ending.

on_tool_end(output, **kwargs)

Run when tool ends running.

on_tool_error(error, **kwargs)

Run when tool errors.

on_tool_start(serialized, input_str, **kwargs)

Run when tool starts running.

reset_callback_meta()

Reset the callback metadata.

setup(**kwargs)

__init__(repo: Optional[str] = None, experiment_name: Optional[str] = None, system_tracking_interval: Optional[int] = 10, log_system_params: bool = True) None[source]¶

Initialize callback handler.

Parameters
  • repo (Optional[str]) –

  • experiment_name (Optional[str]) –

  • system_tracking_interval (Optional[int]) –

  • log_system_params (bool) –

Return type

None

flush_tracker(repo: Optional[str] = None, experiment_name: Optional[str] = None, system_tracking_interval: Optional[int] = 10, log_system_params: bool = True, langchain_asset: Any = None, reset: bool = True, finish: bool = False) None[source]¶

Flush the tracker and reset the session.

Parameters
  • repo (str, optional) – Aim repository path or Repo object to which the Run object is bound. If skipped, the default Repo is used.

  • experiment_name (str, optional) – Sets Run’s experiment property. ‘default’ if not specified. Can be used later to query runs/sequences.

  • system_tracking_interval (int, optional) – Sets the tracking interval in seconds for system usage metrics (CPU, Memory, etc.). Set to None to disable system metrics tracking.

  • log_system_params (bool, optional) – Enable/Disable logging of system params such as installed packages, git info, environment variables, etc.

  • langchain_asset (Any) – The langchain asset to save.

  • reset (bool) – Whether to reset the session.

  • finish (bool) – Whether to finish the run.

Return type

None
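
A hedged sketch of a typical call follows; `llm` here is assumed to be a LangChain LLM that was created with this handler in its callbacks list.

    # Sketch: after some LLM calls have been logged through `aim_callback`,
    # save the LLM configuration as an asset and start a fresh session.
    aim_callback.flush_tracker(
        langchain_asset=llm,                  # assumed: an LLM built with callbacks=[aim_callback]
        experiment_name="scenario 2: agent",  # experiment name for the next Run
        reset=True,                           # reset the session for continued tracking
    )

    # When tracking is complete, finish the run instead of resetting it.
    aim_callback.flush_tracker(langchain_asset=llm, reset=False, finish=True)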

get_custom_callback_meta() Dict[str, Any]¶

Return type

Dict[str, Any]

on_agent_action(action: AgentAction, **kwargs: Any) Any[source]¶

Run on agent action.

Parameters
  • action (AgentAction) –

  • kwargs (Any) –

Return type

Any

on_agent_finish(finish: AgentFinish, **kwargs: Any) None[source]¶

Run when agent ends running.

Parameters
  • finish (AgentFinish) –

  • kwargs (Any) –

Return type

None

on_chain_end(outputs: Dict[str, Any], **kwargs: Any) None[source]¶

Run when chain ends running.

Parameters
  • outputs (Dict[str, Any]) –

  • kwargs (Any) –

Return type

None

on_chain_error(error: BaseException, **kwargs: Any) None[source]¶

Run when chain errors.

Parameters
  • error (BaseException) –

  • kwargs (Any) –

Return type

None

on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) None[source]¶

Run when chain starts running.

Parameters
  • serialized (Dict[str, Any]) –

  • inputs (Dict[str, Any]) –

  • kwargs (Any) –

Return type

None

on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) Any¶

Run when a chat model starts running.

ATTENTION: This method is called for chat models. If you’re implementing a handler for a non-chat model, you should use on_llm_start instead.

Parameters
  • serialized (Dict[str, Any]) –

  • messages (List[List[BaseMessage]]) –

  • run_id (UUID) –

  • parent_run_id (Optional[UUID]) –

  • tags (Optional[List[str]]) –

  • metadata (Optional[Dict[str, Any]]) –

  • kwargs (Any) –

Return type

Any
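
As a brief illustration of the note above, attaching this handler to a chat model (here langchain_openai.ChatOpenAI, an assumption not shown on this page) causes on_chat_model_start to fire rather than on_llm_start; a sketch:

    # Sketch: with a chat model, the chat-specific start hook is used.
    # Assumes `langchain-openai` is installed and OPENAI_API_KEY is set.
    from langchain_community.callbacks.aim_callback import AimCallbackHandler
    from langchain_openai import ChatOpenAI

    aim_callback = AimCallbackHandler(experiment_name="chat-model demo")
    chat = ChatOpenAI(temperature=0, callbacks=[aim_callback])

    # This call triggers on_chat_model_start (and on_llm_end) on the handler.
    chat.invoke("Summarize what the Aim callback handler does in one sentence.")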

on_llm_end(response: LLMResult, **kwargs: Any) None[source]¶

Run when LLM ends running.

Parameters
  • response (LLMResult) –

  • kwargs (Any) –

Return type

None

on_llm_error(error: BaseException, **kwargs: Any) None[source]¶

Run when LLM errors.

Parameters
  • error (BaseException) –

  • kwargs (Any) –

Return type

None

on_llm_new_token(token: str, **kwargs: Any) None[source]¶

Run when LLM generates a new token.

Parameters
  • token (str) –

  • kwargs (Any) –

Return type

None

on_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) None[source]¶

Run when LLM starts.

Parameters
  • serialized (Dict[str, Any]) –

  • prompts (List[str]) –

  • kwargs (Any) –

Return type

None

on_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) Any¶

Run when Retriever ends running.

Parameters
  • documents (Sequence[Document]) –

  • run_id (UUID) –

  • parent_run_id (Optional[UUID]) –

  • kwargs (Any) –

Return type

Any

on_retriever_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) Any¶

Run when Retriever errors.

Parameters
  • error (BaseException) –

  • run_id (UUID) –

  • parent_run_id (Optional[UUID]) –

  • kwargs (Any) –

Return type

Any

on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) Any¶

Run when Retriever starts running.

Parameters
  • serialized (Dict[str, Any]) –

  • query (str) –

  • run_id (UUID) –

  • parent_run_id (Optional[UUID]) –

  • tags (Optional[List[str]]) –

  • metadata (Optional[Dict[str, Any]]) –

  • kwargs (Any) –

Return type

Any

on_retry(retry_state: RetryCallState, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) Any¶

Run on a retry event.

Parameters
  • retry_state (RetryCallState) –

  • run_id (UUID) –

  • parent_run_id (Optional[UUID]) –

  • kwargs (Any) –

Return type

Any

on_text(text: str, **kwargs: Any) None[source]¶

Run when agent is ending.

Parameters
  • text (str) –

  • kwargs (Any) –

Return type

None

on_tool_end(output: Any, **kwargs: Any) None[source]¶

Run when tool ends running.

Parameters
  • output (Any) –

  • kwargs (Any) –

Return type

None

on_tool_error(error: BaseException, **kwargs: Any) None[source]¶

Run when tool errors.

Parameters
  • error (BaseException) –

  • kwargs (Any) –

Return type

None

on_tool_start(serialized: Dict[str, Any], input_str: str, **kwargs: Any) None[source]¶

Run when tool starts running.

Parameters
  • serialized (Dict[str, Any]) –

  • input_str (str) –

  • kwargs (Any) –

Return type

None

reset_callback_meta() None¶

Reset the callback metadata.

Return type

None

setup(**kwargs: Any) None[source]¶

Parameters
  • kwargs (Any) –

Return type

None

Examples using AimCallbackHandler¶
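
The Aim integration notebook pairs the handler with an OpenAI LLM roughly as sketched below; langchain_openai and a valid OPENAI_API_KEY are assumed, and the prompt text is illustrative.

    # Sketch of end-to-end usage, assuming `pip install aim langchain-openai`
    # and that OPENAI_API_KEY is set in the environment.
    from datetime import datetime

    from langchain_community.callbacks.aim_callback import AimCallbackHandler
    from langchain_core.callbacks import StdOutCallbackHandler
    from langchain_openai import OpenAI

    session_group = datetime.now().strftime("%m.%d.%Y_%H.%M.%S")
    aim_callback = AimCallbackHandler(
        repo=".",
        experiment_name=f"scenario 1: OpenAI LLM ({session_group})",
    )
    callbacks = [StdOutCallbackHandler(), aim_callback]

    # Every call made by this LLM is logged to Aim via the handler.
    llm = OpenAI(temperature=0, callbacks=callbacks)
    llm.invoke("Tell me a joke")

    # Save the LLM configuration to Aim and close the run.
    aim_callback.flush_tracker(langchain_asset=llm, reset=False, finish=True)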