langchain.evaluation.regex_match.base.RegexMatchStringEvaluator¶

class langchain.evaluation.regex_match.base.RegexMatchStringEvaluator(*, flags: int = 0, **kwargs: Any)[source]¶

Compute a regex match between the prediction and the reference.

Examples

>>> import re
>>> from langchain.evaluation import RegexMatchStringEvaluator
>>> evaluator = RegexMatchStringEvaluator(flags=re.IGNORECASE)
>>> evaluator.evaluate_strings(
        prediction="Mindy is the CTO",
        reference="^mindy.*cto$",
    )  # This will return {'score': 1.0} due to the IGNORECASE flag
>>> evaluator = RegexMatchStringEvaluator()
>>> evaluator.evaluate_strings(
        prediction="Mindy is the CTO",
        reference="^Mike.*CEO$",
    )  # This will return {'score': 0.0}
>>> evaluator.evaluate_strings(
        prediction="Mindy is the CTO",
        reference="^Mike.*CEO$|^Mindy.*CTO$",
    )  # This will return {'score': 1.0} as the prediction matches the second pattern in the union
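As the examples above suggest, the evaluator scores 1.0 when the reference pattern matches the prediction and 0.0 otherwise. A minimal stdlib-only sketch of that scoring (a hypothetical equivalent using `re.match`, not the library's actual implementation):

```python
import re


def regex_match_score(prediction: str, reference: str, flags: int = 0) -> dict:
    """Hypothetical sketch of regex-match scoring: 1.0 if the
    reference pattern matches the prediction, else 0.0."""
    match = re.match(reference, prediction, flags=flags)
    return {"score": 1.0 if match else 0.0}


# Mirrors the doctest examples above:
print(regex_match_score("Mindy is the CTO", "^mindy.*cto$", flags=re.IGNORECASE))
print(regex_match_score("Mindy is the CTO", "^Mike.*CEO$"))
print(regex_match_score("Mindy is the CTO", "^Mike.*CEO$|^Mindy.*CTO$"))
```

Note that `re.match` anchors only at the start of the string, so a trailing `$` in the pattern is what forces a full-string match.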

Attributes

evaluation_name

Get the evaluation name.

input_keys

Get the input keys.

requires_input

This evaluator does not require input.

requires_reference

This evaluator requires a reference.

Methods

__init__(*[, flags])

aevaluate_strings(*, prediction[, ...])

Asynchronously evaluate Chain or LLM output, based on optional input and label.

evaluate_strings(*, prediction[, reference, ...])

Evaluate Chain or LLM output, based on optional input and label.

Parameters
  • flags (int) – Regex flags (e.g. re.IGNORECASE) applied when matching. Defaults to 0.

  • kwargs (Any) – Additional keyword arguments.

__init__(*, flags: int = 0, **kwargs: Any)[source]¶
Parameters
  • flags (int) – Regex flags (e.g. re.IGNORECASE) applied when matching. Defaults to 0.

  • kwargs (Any) – Additional keyword arguments.

async aevaluate_strings(*, prediction: str, reference: Optional[str] = None, input: Optional[str] = None, **kwargs: Any) → dict¶

Asynchronously evaluate Chain or LLM output, based on optional input and label.

Parameters
  • prediction (str) – The LLM or chain prediction to evaluate.

  • reference (Optional[str], optional) – The reference label to evaluate against.

  • input (Optional[str], optional) – The input to consider during evaluation.

  • **kwargs – Additional keyword arguments, including callbacks, tags, etc.

Returns

The evaluation results containing the score or value.

Return type

dict

evaluate_strings(*, prediction: str, reference: Optional[str] = None, input: Optional[str] = None, **kwargs: Any) → dict¶

Evaluate Chain or LLM output, based on optional input and label.

Parameters
  • prediction (str) – The LLM or chain prediction to evaluate.

  • reference (Optional[str], optional) – The reference label to evaluate against.

  • input (Optional[str], optional) – The input to consider during evaluation.

  • **kwargs – Additional keyword arguments, including callbacks, tags, etc.

Returns

The evaluation results containing the score or value.

Return type

dict
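In an async application, aevaluate_strings can be awaited so several evaluations run concurrently. A sketch of that usage pattern with a hypothetical stdlib-only stand-in for the evaluator (so the example needs no langchain install):

```python
import asyncio
import re


async def aevaluate(prediction: str, reference: str, flags: int = 0) -> dict:
    # Hypothetical stand-in for aevaluate_strings: the same regex
    # scoring, exposed as a coroutine so it can be awaited.
    match = re.match(reference, prediction, flags=flags)
    return {"score": 1.0 if match else 0.0}


async def main() -> list:
    # Evaluate multiple prediction/reference pairs concurrently.
    return await asyncio.gather(
        aevaluate("Mindy is the CTO", "^Mindy.*CTO$"),
        aevaluate("Mike is the CEO", "^Mike.*CEO$"),
    )


results = asyncio.run(main())
print(results)
```

With the real evaluator, each `aevaluate(...)` call would be replaced by `evaluator.aevaluate_strings(prediction=..., reference=...)`.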