langchain_experimental.comprehend_moderation.base_moderation_exceptions.ModerationToxicityError
- class langchain_experimental.comprehend_moderation.base_moderation_exceptions.ModerationToxicityError(message: str = 'The prompt contains toxic content and cannot be processed')
Exception raised if toxic entities are detected in the prompt.
- message (str) -- explanation of the error; defaults to the message shown in the signature above.
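
A minimal sketch of how this exception behaves, based on the signature above. It constructs and catches the error directly; in practice it would typically be raised by a Comprehend-based moderation step when toxicity is detected, and the surrounding application would catch it in the same way.

```python
from langchain_experimental.comprehend_moderation.base_moderation_exceptions import (
    ModerationToxicityError,
)

# Raise with the default message, or pass a custom explanation.
try:
    raise ModerationToxicityError()
except ModerationToxicityError as err:
    # The `message` attribute holds the explanation of the error.
    print(err.message)
    # "The prompt contains toxic content and cannot be processed"
```

Catching this exception around a moderated chain call lets an application fall back to a safe response instead of propagating the toxic prompt further.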