langchain.evaluation.parsing.json_distance.JsonEditDistanceEvaluator
- class langchain.evaluation.parsing.json_distance.JsonEditDistanceEvaluator(string_distance: Optional[Callable[[str, str], float]] = None, canonicalize: Optional[Callable[[Any], Any]] = None, **kwargs: Any)
An evaluator that calculates the edit distance between JSON strings.
This evaluator computes a normalized Damerau-Levenshtein distance between two JSON strings after parsing them and converting them to a canonical format (i.e., whitespace and key order are normalized). It can be customized with alternative distance and canonicalization functions.
- Parameters
string_distance (Optional[Callable[[str, str], float]]) – A callable that computes the distance between two strings. If not provided, a Damerau-Levenshtein distance from the rapidfuzz package will be used.
canonicalize (Optional[Callable[[Any], Any]]) – A callable that converts a parsed JSON object into its canonical string form. If not provided, the default behavior is to serialize the JSON with sorted keys and no extra whitespace.
**kwargs (Any) – Additional keyword arguments.
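The default canonical form described above is roughly equivalent to the sketch below; the helper name default_canonicalize is illustrative only and not part of the class API.

>>> import json
>>> def default_canonicalize(parsed):
...     # Sorted keys and compact separators remove key-order and whitespace differences.
...     return json.dumps(parsed, sort_keys=True, separators=(",", ":"))
>>> default_canonicalize(json.loads('{"b": 2, "a": 1}'))
'{"a":1,"b":2}'
>>> default_canonicalize(json.loads('{ "a" : 1 , "b" : 2 }'))
'{"a":1,"b":2}'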
- _string_distance – The internal distance computation function. Type: Callable[[str, str], float]
- _canonicalize – The internal canonicalization function. Type: Callable[[Any], Any]
Examples
>>> evaluator = JsonEditDistanceEvaluator()
>>> result = evaluator.evaluate_strings(prediction='{"a": 1, "b": 2}', reference='{"a": 1, "b": 3}')
>>> assert result["score"] is not None
- Raises
ImportError – If rapidfuzz is not installed and no alternative string_distance function is provided.
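If rapidfuzz is unavailable, any Callable[[str, str], float] can be supplied instead. A minimal sketch using the standard library's difflib follows; note that a similarity-ratio distance is not Damerau-Levenshtein, so scores will differ from the default.

>>> from difflib import SequenceMatcher
>>> def ratio_distance(a, b):
...     # 1.0 minus the similarity ratio, so identical strings yield 0.0.
...     return 1.0 - SequenceMatcher(None, a, b).ratio()
>>> evaluator = JsonEditDistanceEvaluator(string_distance=ratio_distance)
>>> result = evaluator.evaluate_strings(prediction='{"a": 1}', reference='{"a": 2}')
>>> assert 0.0 <= result["score"] <= 1.0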
Attributes
- evaluation_name – The name of the evaluation.
- requires_input – Whether this evaluator requires an input string.
- requires_reference – Whether this evaluator requires a reference label.
Methods
- __init__([string_distance, canonicalize])
- aevaluate_strings(*, prediction[, reference, input, ...]) – Asynchronously evaluate Chain or LLM output, based on optional input and label.
- evaluate_strings(*, prediction[, reference, input, ...]) – Evaluate Chain or LLM output, based on optional input and label.
- __init__(string_distance: Optional[Callable[[str, str], float]] = None, canonicalize: Optional[Callable[[Any], Any]] = None, **kwargs: Any) → None
- async aevaluate_strings(*, prediction: str, reference: Optional[str] = None, input: Optional[str] = None, **kwargs: Any) → dict
Asynchronously evaluate Chain or LLM output, based on optional input and label.
- Parameters
prediction (str) – The LLM or chain prediction to evaluate.
reference (Optional[str], optional) – The reference label to evaluate against.
input (Optional[str], optional) – The input to consider during evaluation.
**kwargs – Additional keyword arguments, including callbacks, tags, etc.
- Returns
The evaluation results containing the score or value.
- Return type
dict
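A usage sketch for the async variant, assuming no event loop is already running (asyncio.run starts one):

>>> import asyncio
>>> evaluator = JsonEditDistanceEvaluator()
>>> result = asyncio.run(evaluator.aevaluate_strings(prediction='{"a": 1, "b": 2}', reference='{"a": 1, "b": 3}'))
>>> assert result["score"] is not None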
- evaluate_strings(*, prediction: str, reference: Optional[str] = None, input: Optional[str] = None, **kwargs: Any) → dict
Evaluate Chain or LLM output, based on optional input and label.
- Parameters
prediction (str) – The LLM or chain prediction to evaluate.
reference (Optional[str], optional) – The reference label to evaluate against.
input (Optional[str], optional) – The input to consider during evaluation.
**kwargs – Additional keyword arguments, including callbacks, tags, etc.
- Returns
The evaluation results containing the score or value.
- Return type
dict
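Because both strings are parsed and canonicalized before the distance is computed, a prediction that differs from the reference only in whitespace or key order should score 0.0 under the default settings:

>>> evaluator = JsonEditDistanceEvaluator()
>>> result = evaluator.evaluate_strings(prediction='{"b": 2, "a": 1}', reference='{"a": 1, "b": 2}')
>>> assert result["score"] == 0.0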