langchain_community.callbacks.comet_ml_callback.CometCallbackHandler¶
- class langchain_community.callbacks.comet_ml_callback.CometCallbackHandler(task_type: Optional[str] = 'inference', workspace: Optional[str] = None, project_name: Optional[str] = None, tags: Optional[Sequence] = None, name: Optional[str] = None, visualizations: Optional[List[str]] = None, complexity_metrics: bool = False, custom_metrics: Optional[Callable] = None, stream_logs: bool = True)[source]¶
Callback Handler that logs to Comet.
- Parameters
task_type (str) – The type of comet_ml task, such as “inference”, “testing” or “qc”
workspace (str) – The comet_ml workspace to log to
project_name (str) – The comet_ml project name
tags (list) – Tags to add to the task
name (str) – Name of the comet_ml task
visualizations (list) – The types of text visualizations to generate and log for prompts and responses
complexity_metrics (bool) – Whether to log complexity metrics
custom_metrics (Callable) – A function for computing custom metrics to log
stream_logs (bool) – Whether to stream callback actions to Comet
This handler uses the corresponding callback method for each event, formats the callback's input with metadata about the state of the LLM run, appends the result to both the {method}_records list and the action records, and then logs it to Comet.
Initialize callback handler.
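A minimal usage sketch, assuming the comet_ml package is installed, a Comet API key is configured in the environment, and an OpenAI LLM is available; the project name and prompt below are placeholders:

    from langchain_community.callbacks.comet_ml_callback import CometCallbackHandler
    from langchain_community.llms import OpenAI

    # Placeholder project name; comet_ml must be installed and a Comet API key configured.
    comet_callback = CometCallbackHandler(
        project_name="comet-langchain-demo",
        complexity_metrics=True,
        stream_logs=True,
        tags=["llm"],
    )

    # Attach the handler so each LLM callback is formatted and logged to Comet.
    llm = OpenAI(temperature=0.9, callbacks=[comet_callback], verbose=True)
    llm.invoke("Tell me a joke")

    # Flush the collected records and finish the Comet experiment.
    comet_callback.flush_tracker(llm, finish=True)

The same handler can be passed via callbacks= to chains, tools, or agents, in which case the corresponding callback methods listed below record their inputs and outputs in the same way.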
Attributes
always_verbose
Whether to call verbose callbacks even if verbose is False.
ignore_agent
Whether to ignore agent callbacks.
ignore_chain
Whether to ignore chain callbacks.
ignore_chat_model
Whether to ignore chat model callbacks.
ignore_llm
Whether to ignore LLM callbacks.
ignore_retriever
Whether to ignore retriever callbacks.
ignore_retry
Whether to ignore retry callbacks.
raise_error
run_inline
Methods
__init__([task_type, workspace, ...]) – Initialize callback handler.
flush_tracker([langchain_asset, task_type, ...]) – Flush the tracker and set up the session.
on_agent_action(action, **kwargs) – Run on agent action.
on_agent_finish(finish, **kwargs) – Run when agent ends running.
on_chain_end(outputs, **kwargs) – Run when chain ends running.
on_chain_error(error, **kwargs) – Run when chain errors.
on_chain_start(serialized, inputs, **kwargs) – Run when chain starts running.
on_chat_model_start(serialized, messages, *, ...) – Run when a chat model starts running.
on_llm_end(response, **kwargs) – Run when LLM ends running.
on_llm_error(error, **kwargs) – Run when LLM errors.
on_llm_new_token(token, **kwargs) – Run when LLM generates a new token.
on_llm_start(serialized, prompts, **kwargs) – Run when LLM starts.
on_retriever_end(documents, *, run_id[, ...]) – Run when Retriever ends running.
on_retriever_error(error, *, run_id[, ...]) – Run when Retriever errors.
on_retriever_start(serialized, query, *, run_id) – Run when Retriever starts running.
on_retry(retry_state, *, run_id[, parent_run_id]) – Run on a retry event.
on_text(text, **kwargs) – Run when agent is ending.
on_tool_end(output, **kwargs) – Run when tool ends running.
on_tool_error(error, **kwargs) – Run when tool errors.
on_tool_start(serialized, input_str, **kwargs) – Run when tool starts running.
reset_callback_meta() – Reset the callback metadata.
- __init__(task_type: Optional[str] = 'inference', workspace: Optional[str] = None, project_name: Optional[str] = None, tags: Optional[Sequence] = None, name: Optional[str] = None, visualizations: Optional[List[str]] = None, complexity_metrics: bool = False, custom_metrics: Optional[Callable] = None, stream_logs: bool = True) None [source]¶
Initialize callback handler.
- flush_tracker(langchain_asset: Any = None, task_type: Optional[str] = 'inference', workspace: Optional[str] = None, project_name: Optional[str] = 'comet-langchain-demo', tags: Optional[Sequence] = None, name: Optional[str] = None, visualizations: Optional[List[str]] = None, complexity_metrics: bool = False, custom_metrics: Optional[Callable] = None, finish: bool = False, reset: bool = False) None [source]¶
Flush the tracker and set up the session.
Everything logged after this call will be recorded in a new table.
- Parameters
name – Name identifying the session performed so far.
langchain_asset – The langchain asset to save.
finish – Whether to finish the run.
- Returns
None
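A short sketch of flushing between logged assets, assuming comet_ml is configured and an OpenAI LLM is available; the project name, prompt template, and inputs are illustrative:

    from langchain.chains import LLMChain
    from langchain.prompts import PromptTemplate
    from langchain_community.callbacks.comet_ml_callback import CometCallbackHandler
    from langchain_community.llms import OpenAI

    comet_callback = CometCallbackHandler(project_name="comet-langchain-demo")
    llm = OpenAI(temperature=0.9, callbacks=[comet_callback])

    # Log a raw LLM generation, then flush with reset=True so subsequent
    # logging starts fresh while the handler stays usable.
    llm.invoke("Tell me a fact about space")
    comet_callback.flush_tracker(llm, reset=True)

    # Log an LLMChain run, then flush with finish=True to finish the run.
    prompt = PromptTemplate(input_variables=["topic"], template="Write a haiku about {topic}.")
    chain = LLMChain(llm=llm, prompt=prompt, callbacks=[comet_callback])
    chain.invoke({"topic": "the ocean"})
    comet_callback.flush_tracker(chain, finish=True)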
- get_custom_callback_meta() Dict[str, Any] ¶
- on_agent_action(action: AgentAction, **kwargs: Any) Any [source]¶
Run on agent action.
- on_agent_finish(finish: AgentFinish, **kwargs: Any) None [source]¶
Run when agent ends running.
- on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) None [source]¶
Run when chain starts running.
- on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) Any ¶
Run when a chat model starts running.
- on_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) None [source]¶
Run when LLM starts.
- on_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) Any ¶
Run when Retriever ends running.
- on_retriever_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) Any ¶
Run when Retriever errors.
- on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) Any ¶
Run when Retriever starts running.
- on_retry(retry_state: RetryCallState, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) Any ¶
Run on a retry event.
- on_tool_start(serialized: Dict[str, Any], input_str: str, **kwargs: Any) None [source]¶
Run when tool starts running.
- reset_callback_meta() None ¶
Reset the callback metadata.