langchain_experimental.comprehend_moderation.toxicity.ComprehendToxicity
- class langchain_experimental.comprehend_moderation.toxicity.ComprehendToxicity(client: Any, callback: Optional[Any] = None, unique_id: Optional[str] = None, chain_id: Optional[str] = None)[source]
Methods
__init__(client[, callback, unique_id, chain_id])

validate(prompt_value[, config])
    Check the toxicity of a given text prompt using the AWS Comprehend service and apply actions based on configuration.
- __init__(client: Any, callback: Optional[Any] = None, unique_id: Optional[str] = None, chain_id: Optional[str] = None) → None[source]
- validate(prompt_value: str, config: Any = None) → str[source]
Check the toxicity of a given text prompt using the AWS Comprehend service and apply actions based on configuration.
- Parameters
    - prompt_value (str) – The text content to be checked for toxicity.
    - config (Dict[str, Any]) – Configuration for toxicity checks and actions.
- Returns
The original prompt_value if allowed or no toxicity found.
- Return type
str
- Raises
    ValueError – If the prompt contains toxic labels and cannot be processed based on the configuration.
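A minimal usage sketch follows, assuming boto3 credentials that can call Amazon Comprehend; the config keys shown ("threshold", "labels") are illustrative assumptions about the Dict[str, Any] configuration, not part of this reference.

```python
# Sketch only: client setup and config keys are assumptions, adjust to your setup.
import boto3

from langchain_experimental.comprehend_moderation.toxicity import ComprehendToxicity

# Create an Amazon Comprehend client (region is an example value).
comprehend_client = boto3.client("comprehend", region_name="us-east-1")

toxicity_check = ComprehendToxicity(client=comprehend_client)

try:
    # validate() returns the original prompt if it is allowed or no toxicity is found.
    checked_prompt = toxicity_check.validate(
        prompt_value="Tell me about the weather in Seattle.",
        config={"threshold": 0.5, "labels": []},  # assumed config keys
    )
    print(checked_prompt)
except ValueError:
    # Raised when the prompt contains toxic labels and cannot be processed
    # based on the configuration.
    print("Prompt blocked by toxicity moderation.")
```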