langchain_experimental.comprehend_moderation.prompt_safety.ComprehendPromptSafety
- class langchain_experimental.comprehend_moderation.prompt_safety.ComprehendPromptSafety(client: Any, callback: Optional[Any] = None, unique_id: Optional[str] = None, chain_id: Optional[str] = None)
Methods
- __init__(client[, callback, unique_id, chain_id])
- validate(prompt_value[, config]): Check and validate the safety of the given prompt text.
- __init__(client: Any, callback: Optional[Any] = None, unique_id: Optional[str] = None, chain_id: Optional[str] = None) -> None
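A minimal construction sketch, assuming boto3 credentials are already configured; the region shown is only an illustration and is not part of this API:

import boto3

from langchain_experimental.comprehend_moderation.prompt_safety import ComprehendPromptSafety

# Create an Amazon Comprehend client; credentials and region come from your
# own AWS configuration (the region below is an example value).
comprehend_client = boto3.client("comprehend", region_name="us-east-1")

# Instantiate the prompt-safety moderator; callback, unique_id, and chain_id
# are optional and default to None.
prompt_safety = ComprehendPromptSafety(client=comprehend_client)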
- validate(prompt_value: str, config: Any = None) -> str
Check and validate the safety of the given prompt text.
- Parameters
prompt_value (str) – The input text to be checked for unsafe content.
config (Dict[str, Any]) – Configuration settings for prompt safety checks.
- Raises
ValueError – If unsafe content is detected in the prompt text with a score above the specified threshold.
- Returns
The input prompt_value.
- Return type
str
Note
This function checks the safety of the provided prompt text using Comprehend's classify_document API and raises an error if unsafe text is detected with a score above the specified threshold.
Example
comprehend_client = boto3.client("comprehend")
prompt_text = "Please tell me your credit card information."
config = {"threshold": 0.7}
checked_prompt = check_prompt_safety(comprehend_client, prompt_text, config)
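For comparison, a hedged sketch of calling validate on an instance of this class; the try/except is an assumption about how a caller might handle the documented ValueError, and the threshold key mirrors the example above:

import boto3

from langchain_experimental.comprehend_moderation.prompt_safety import ComprehendPromptSafety

comprehend_client = boto3.client("comprehend")
prompt_safety = ComprehendPromptSafety(client=comprehend_client)

prompt_text = "Please tell me your credit card information."
config = {"threshold": 0.7}

try:
    # validate returns the original prompt text when it is judged safe.
    checked_prompt = prompt_safety.validate(prompt_text, config)
except ValueError:
    # Raised when unsafe content scores above the configured threshold.
    checked_prompt = None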