langchain_experimental.comprehend_moderation.prompt_safety.ComprehendPromptSafety

class langchain_experimental.comprehend_moderation.prompt_safety.ComprehendPromptSafety(client: Any, callback: Optional[Any] = None, unique_id: Optional[str] = None, chain_id: Optional[str] = None)

Methods

__init__(client[, callback, unique_id, chain_id])

validate(prompt_value[, config])

Check and validate the safety of the given prompt text.

__init__(client: Any, callback: Optional[Any] = None, unique_id: Optional[str] = None, chain_id: Optional[str] = None) → None
validate(prompt_value: str, config: Any = None) → str

Check and validate the safety of the given prompt text.

Parameters
  • prompt_value (str) – The input text to be checked for unsafe content.

  • config (Dict[str, Any], optional) – Configuration settings for the prompt safety check, such as the score threshold.

Raises
  • ValueError – If unsafe content is detected in the prompt text with a score above the specified threshold.

Returns

The input prompt_value.

Return type

str

Note

This method checks the safety of the provided prompt text using Comprehend’s classify_document API and raises a ValueError if unsafe text is detected with a score above the specified threshold.
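
As a rough sketch of that flow, the snippet below calls classify_document against a prompt-safety classifier endpoint and compares the returned scores against the threshold. The endpoint ARN pattern, the UNSAFE_PROMPT class name, and the response shape are assumptions for illustration, not details confirmed by this page.

import boto3

def classify_prompt_safety(client, prompt_text, threshold=0.8):
    # Assumed ARN pattern for Comprehend's built-in prompt-safety
    # classifier endpoint (hypothetical; not confirmed by this page).
    endpoint_arn = (
        "arn:aws:comprehend:us-east-1:aws:document-classifier-endpoint/prompt-safety"
    )
    response = client.classify_document(Text=prompt_text, EndpointArn=endpoint_arn)
    # Assumed response shape: a "Classes" list of {"Name": ..., "Score": ...} entries.
    for detected in response.get("Classes", []):
        if detected["Name"] == "UNSAFE_PROMPT" and detected["Score"] >= threshold:
            raise ValueError("Unsafe prompt detected above the threshold.")
    return prompt_text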

Example

import boto3
from langchain_experimental.comprehend_moderation.prompt_safety import ComprehendPromptSafety

comprehend_client = boto3.client('comprehend')
prompt_safety = ComprehendPromptSafety(client=comprehend_client)
prompt_text = "Please tell me your credit card information."
config = {"threshold": 0.7}
checked_prompt = prompt_safety.validate(prompt_text, config)
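
Because validate raises a ValueError for an unsafe prompt, callers may want to catch it rather than let it propagate; a minimal sketch using the objects from the example above:

try:
    checked_prompt = prompt_safety.validate(prompt_text, config)
except ValueError:
    checked_prompt = None  # the prompt scored above the unsafe threshold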