langchain_community.callbacks.whylabs_callback.WhyLabsCallbackHandler

class langchain_community.callbacks.whylabs_callback.WhyLabsCallbackHandler(logger: Logger, handler: Any)[source]

Callback Handler for logging to WhyLabs. This callback handler utilizes langkit to extract features from the prompts & responses when interacting with an LLM. These features can be used to guardrail, evaluate, and observe interactions over time to detect issues relating to hallucinations, prompt engineering, or output validation. LangKit is an LLM monitoring toolkit developed by WhyLabs.

Here are some examples of what can be monitored with LangKit:

  • Text Quality
      ◦ readability score
      ◦ complexity and grade scores

  • Text Relevance
      ◦ similarity scores between prompts and responses
      ◦ similarity scores against user-defined themes
      ◦ topic classification

  • Security and Privacy
      ◦ patterns: count of strings matching a user-defined regex pattern group
      ◦ jailbreaks: similarity scores with respect to known jailbreak attempts
      ◦ prompt injection: similarity scores with respect to known prompt attacks
      ◦ refusals: similarity scores with respect to known LLM refusal responses

  • Sentiment and Toxicity
      ◦ sentiment analysis
      ◦ toxicity analysis

For more information, see https://docs.whylabs.ai/docs/language-model-monitoring or check out the LangKit repo here: https://github.com/whylabs/langkit

Parameters
  • api_key (Optional[str]) – WhyLabs API key. Optional because the preferred way to specify the API key is with environment variable WHYLABS_API_KEY.

  • org_id (Optional[str]) – WhyLabs organization id to write profiles to. Optional because the preferred way to specify the organization id is with environment variable WHYLABS_DEFAULT_ORG_ID.

  • dataset_id (Optional[str]) – WhyLabs dataset id to write profiles to. Optional because the preferred way to specify the dataset id is with environment variable WHYLABS_DEFAULT_DATASET_ID.

  • sentiment (bool) – Whether to enable sentiment analysis. Defaults to False.

  • toxicity (bool) – Whether to enable toxicity analysis. Defaults to False.

  • themes (bool) – Whether to enable theme analysis. Defaults to False.

Initiate the rolling logger.
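
A minimal usage sketch (the model, prompt, and metric choices are illustrative, and WhyLabs credentials are assumed to be set through the environment variables above):

    from langchain_community.callbacks import WhyLabsCallbackHandler
    from langchain_community.llms import OpenAI

    # Build the handler; credentials and routing are read from WHYLABS_API_KEY,
    # WHYLABS_DEFAULT_ORG_ID and WHYLABS_DEFAULT_DATASET_ID when not passed.
    whylabs = WhyLabsCallbackHandler.from_params()

    # Attach the handler so prompts and responses are profiled as they flow.
    llm = OpenAI(temperature=0, callbacks=[whylabs])
    result = llm.generate(["Tell me a joke about data quality."])

    # Write any remaining profiles to WhyLabs before exiting.
    whylabs.close()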

Attributes

ignore_agent

Whether to ignore agent callbacks.

ignore_chain

Whether to ignore chain callbacks.

ignore_chat_model

Whether to ignore chat model callbacks.

ignore_llm

Whether to ignore LLM callbacks.

ignore_retriever

Whether to ignore retriever callbacks.

ignore_retry

Whether to ignore retry callbacks.

raise_error

run_inline

Methods

__init__(logger, handler)

Initiate the rolling logger.

close()

Close any loggers to allow writing out of any profiles before exiting.

flush()

Explicitly write current profile if using a rolling logger.

from_params(*[, api_key, org_id, ...])

Instantiate whylogs Logger from params.

on_agent_action(action, *, run_id[, ...])

Run on agent action.

on_agent_finish(finish, *, run_id[, ...])

Run on agent end.

on_chain_end(outputs, *, run_id[, parent_run_id])

Run when chain ends running.

on_chain_error(error, *, run_id[, parent_run_id])

Run when chain errors.

on_chain_start(serialized, inputs, *, run_id)

Run when chain starts running.

on_chat_model_start(serialized, messages, *, ...)

Run when a chat model starts running.

on_llm_end(response, *, run_id[, parent_run_id])

Run when LLM ends running.

on_llm_error(error, *, run_id[, parent_run_id])

Run when LLM errors.

on_llm_new_token(token, *[, chunk, ...])

Run on new LLM token.

on_llm_start(serialized, prompts, *, run_id)

Run when LLM starts running.

on_retriever_end(documents, *, run_id[, ...])

Run when Retriever ends running.

on_retriever_error(error, *, run_id[, ...])

Run when Retriever errors.

on_retriever_start(serialized, query, *, run_id)

Run when Retriever starts running.

on_retry(retry_state, *, run_id[, parent_run_id])

Run on a retry event.

on_text(text, *, run_id[, parent_run_id])

Run on arbitrary text.

on_tool_end(output, *, run_id[, parent_run_id])

Run when tool ends running.

on_tool_error(error, *, run_id[, parent_run_id])

Run when tool errors.

on_tool_start(serialized, input_str, *, run_id)

Run when tool starts running.

__init__(logger: Logger, handler: Any)[source]

Initiate the rolling logger.

close() → None[source]

Close any loggers to allow writing out of any profiles before exiting.

flush() → None[source]

Explicitly write current profile if using a rolling logger.
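
For long-running services, a small sketch of the profile lifecycle (imports and prompt are illustrative): flush() pushes the current rolling profile immediately, and close() is called once before the process exits.

    from langchain_community.callbacks import WhyLabsCallbackHandler
    from langchain_community.llms import OpenAI

    whylabs = WhyLabsCallbackHandler.from_params()
    try:
        llm = OpenAI(temperature=0, callbacks=[whylabs])
        llm.invoke("Summarize today's support tickets in one sentence.")
        whylabs.flush()   # write the current rolling profile now instead of waiting
    finally:
        whylabs.close()   # flush and close loggers before shutdown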

classmethod from_params(*, api_key: Optional[str] = None, org_id: Optional[str] = None, dataset_id: Optional[str] = None, sentiment: bool = False, toxicity: bool = False, themes: bool = False, logger: Optional[Logger] = None) → WhyLabsCallbackHandler[source]

Instantiate whylogs Logger from params.

Parameters
  • api_key (Optional[str]) – WhyLabs API key. Optional because the preferred way to specify the API key is with environment variable WHYLABS_API_KEY.

  • org_id (Optional[str]) – WhyLabs organization id to write profiles to. If not set must be specified in environment variable WHYLABS_DEFAULT_ORG_ID.

  • dataset_id (Optional[str]) – The model or dataset this callback is gathering telemetry for. If not set must be specified in environment variable WHYLABS_DEFAULT_DATASET_ID.

  • sentiment (bool) – If True, will initialize a model to compute a sentiment analysis compound score. Defaults to False and will not gather this metric.

  • toxicity (bool) – If True, will initialize a model to score toxicity. Defaults to False and will not gather this metric.

  • themes (bool) – If True, will initialize a model to calculate distance to configured themes. Defaults to False and will not gather this metric.

  • logger (Optional[Logger]) – If specified will bind the configured logger as the telemetry gathering agent. Defaults to LangKit schema with periodic WhyLabs writer.
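
A configuration sketch: the organization and dataset ids below are placeholders, and enabling sentiment, toxicity, or themes assumes the corresponding optional langkit dependencies are installed.

    import os

    from langchain_community.callbacks import WhyLabsCallbackHandler

    # Routing can come from the environment instead of explicit arguments.
    os.environ["WHYLABS_DEFAULT_ORG_ID"] = "org-xxxx"      # placeholder
    os.environ["WHYLABS_DEFAULT_DATASET_ID"] = "model-1"   # placeholder
    # WHYLABS_API_KEY is assumed to be set outside this snippet.

    # Optional LangKit metrics are switched on per handler.
    whylabs = WhyLabsCallbackHandler.from_params(
        sentiment=True,  # sentiment analysis compound score
        toxicity=True,   # toxicity score
        themes=True,     # distance to configured themes
    )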

on_agent_action(action: AgentAction, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any

Run on agent action.

on_agent_finish(finish: AgentFinish, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any

Run on agent end.

on_chain_end(outputs: Dict[str, Any], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any

Run when chain ends running.

on_chain_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any

Run when chain errors.

on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any

Run when chain starts running.

on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any

Run when a chat model starts running.

on_llm_end(response: LLMResult, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any

Run when LLM ends running.

on_llm_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any

Run when LLM errors.

Parameters
  • error (BaseException) – The error that occurred.

  • kwargs (Any) – Additional keyword arguments, which may include response (LLMResult), the response that was generated before the error occurred.

on_llm_new_token(token: str, *, chunk: Optional[Union[GenerationChunk, ChatGenerationChunk]] = None, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any

Run on new LLM token. Only available when streaming is enabled.

Parameters
  • token (str) – The new token.

  • chunk (Optional[Union[GenerationChunk, ChatGenerationChunk]]) – The newly generated chunk, containing content and other information.

on_llm_start(serialized: Dict[str, Any], prompts: List[str], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any

Run when LLM starts running.

on_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any

Run when Retriever ends running.

on_retriever_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any

Run when Retriever errors.

on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any

Run when Retriever starts running.

on_retry(retry_state: RetryCallState, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any

Run on a retry event.

on_text(text: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any

Run on arbitrary text.

on_tool_end(output: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any

Run when tool ends running.

on_tool_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any

Run when tool errors.

on_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any

Run when tool starts running.

Examples using WhyLabsCallbackHandler