langchain_community.callbacks.clearml_callback.ClearMLCallbackHandler

class langchain_community.callbacks.clearml_callback.ClearMLCallbackHandler(task_type: Optional[str] = 'inference', project_name: Optional[str] = 'langchain_callback_demo', tags: Optional[Sequence] = None, task_name: Optional[str] = None, visualize: bool = False, complexity_metrics: bool = False, stream_logs: bool = False)[source]

Callback Handler that logs to ClearML.

Parameters
  • task_type (str) – The type of ClearML task to create, such as “inference”, “testing” or “qc”

  • project_name (str) – The ClearML project name

  • tags (list) – Tags to add to the task

  • task_name (str) – Name of the ClearML task

  • visualize (bool) – Whether to visualize the run

  • complexity_metrics (bool) – Whether to log complexity metrics

  • stream_logs (bool) – Whether to stream callback actions to ClearML

This handler utilizes the associated callback method: it formats the input of each callback function with metadata about the state of the LLM run, adds the response to the list of records for both the {method}_records and the action, and then logs the response to the ClearML console.

Initialize callback handler.
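A minimal usage sketch (not part of the reference itself): it assumes the clearml package is installed with credentials configured, plus textstat and spacy for the complexity_metrics and visualize options; the model choice and prompt are illustrative only.

from langchain_community.callbacks import ClearMLCallbackHandler
from langchain_openai import OpenAI  # illustrative model choice

# Create the handler; this sets up a ClearML task for logging.
clearml_callback = ClearMLCallbackHandler(
    task_type="inference",
    project_name="langchain_callback_demo",
    task_name="llm",
    tags=["test"],
    visualize=True,
    complexity_metrics=True,
    stream_logs=True,
)

# Attach the handler so each callback event is recorded and logged to ClearML.
llm = OpenAI(temperature=0, callbacks=[clearml_callback])
llm.invoke("Tell me a joke")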

Attributes

always_verbose

Whether to call verbose callbacks even if verbose is False.

ignore_agent

Whether to ignore agent callbacks.

ignore_chain

Whether to ignore chain callbacks.

ignore_chat_model

Whether to ignore chat model callbacks.

ignore_llm

Whether to ignore LLM callbacks.

ignore_retriever

Whether to ignore retriever callbacks.

ignore_retry

Whether to ignore retry callbacks.

raise_error

run_inline

Methods

__init__([task_type, project_name, tags, ...])

Initialize callback handler.

analyze_text(text)

Analyze text using textstat and spacy.

flush_tracker([name, langchain_asset, finish])

Flush the tracker and setup the session.

get_custom_callback_meta()

on_agent_action(action, **kwargs)

Run on agent action.

on_agent_finish(finish, **kwargs)

Run when agent ends running.

on_chain_end(outputs, **kwargs)

Run when chain ends running.

on_chain_error(error, **kwargs)

Run when chain errors.

on_chain_start(serialized, inputs, **kwargs)

Run when chain starts running.

on_chat_model_start(serialized, messages, *, ...)

Run when a chat model starts running.

on_llm_end(response, **kwargs)

Run when LLM ends running.

on_llm_error(error, **kwargs)

Run when LLM errors.

on_llm_new_token(token, **kwargs)

Run when LLM generates a new token.

on_llm_start(serialized, prompts, **kwargs)

Run when LLM starts.

on_retriever_end(documents, *, run_id[, ...])

Run when Retriever ends running.

on_retriever_error(error, *, run_id[, ...])

Run when Retriever errors.

on_retriever_start(serialized, query, *, run_id)

Run when Retriever starts running.

on_retry(retry_state, *, run_id[, parent_run_id])

Run on a retry event.

on_text(text, **kwargs)

Run when agent is ending.

on_tool_end(output, **kwargs)

Run when tool ends running.

on_tool_error(error, **kwargs)

Run when tool errors.

on_tool_start(serialized, input_str, **kwargs)

Run when tool starts running.

reset_callback_meta()

Reset the callback metadata.

__init__(task_type: Optional[str] = 'inference', project_name: Optional[str] = 'langchain_callback_demo', tags: Optional[Sequence] = None, task_name: Optional[str] = None, visualize: bool = False, complexity_metrics: bool = False, stream_logs: bool = False) → None[source]

Initialize callback handler.

analyze_text(text: str) → dict[source]

Analyze text using textstat and spacy.

Parameters

text (str) – The text to analyze.

Returns

A dictionary containing the complexity metrics.

Return type

(dict)
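A sketch of calling analyze_text directly, reusing the clearml_callback handler from the earlier sketch; the handler normally calls this internally for each generation when complexity_metrics=True, and textstat (plus spacy with a language model, when visualizations are enabled) must be installed. The metric key shown is an assumption.

# Compute readability/complexity metrics for a piece of text.
metrics = clearml_callback.analyze_text(
    "The quick brown fox jumps over the lazy dog."
)
# "flesch_reading_ease" is one of the textstat metrics (assumed key name).
print(metrics.get("flesch_reading_ease"))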

flush_tracker(name: Optional[str] = None, langchain_asset: Any = None, finish: bool = False) → None[source]

Flush the tracker and setup the session.

Everything after this will be a new table.

Parameters
  • name – Name of the session so far, so that it is identifiable

  • langchain_asset – The langchain asset to save.

  • finish – Whether to finish the run.

Returns

None
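A hedged sketch of flushing after a run, continuing with the clearml_callback handler and llm from the earlier sketch; the session names are illustrative:

# Log everything recorded so far under an identifiable session name,
# optionally saving the LLM as a ClearML artifact; records after this
# call go into a new table.
clearml_callback.flush_tracker(
    name="joke_session",
    langchain_asset=llm,
    finish=False,  # keep the ClearML task open for further logging
)

# When the experiment is completely done, close out the ClearML task.
clearml_callback.flush_tracker(name="final", finish=True)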

get_custom_callback_meta() → Dict[str, Any]

on_agent_action(action: AgentAction, **kwargs: Any) → Any[source]

Run on agent action.

on_agent_finish(finish: AgentFinish, **kwargs: Any) → None[source]

Run when agent ends running.

on_chain_end(outputs: Dict[str, Any], **kwargs: Any) → None[source]

Run when chain ends running.

on_chain_error(error: BaseException, **kwargs: Any) → None[source]

Run when chain errors.

on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) → None[source]

Run when chain starts running.

on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any

Run when a chat model starts running.

on_llm_end(response: LLMResult, **kwargs: Any) → None[source]

Run when LLM ends running.

on_llm_error(error: BaseException, **kwargs: Any) → None[source]

Run when LLM errors.

on_llm_new_token(token: str, **kwargs: Any) → None[source]

Run when LLM generates a new token.

on_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) → None[source]

Run when LLM starts.

on_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any

Run when Retriever ends running.

on_retriever_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any

Run when Retriever errors.

on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any

Run when Retriever starts running.

on_retry(retry_state: RetryCallState, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any

Run on a retry event.

on_text(text: str, **kwargs: Any) → None[source]

Run when agent is ending.

on_tool_end(output: str, **kwargs: Any) → None[source]

Run when tool ends running.

on_tool_error(error: BaseException, **kwargs: Any) → None[source]

Run when tool errors.

on_tool_start(serialized: Dict[str, Any], input_str: str, **kwargs: Any) → None[source]

Run when tool starts running.

reset_callback_meta() → None

Reset the callback metadata.
