[LEADS-230] fix: missing metric_metadata value in csv #163
asamal4 wants to merge 1 commit into lightspeed-core:main
Conversation
Walkthrough: The PR renames the …

Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes

🚥 Pre-merge checks: ✅ 4 passed
Caution: Some comments are outside the diff and can't be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/lightspeed_evaluation/pipeline/evaluation/evaluator.py (1)
488-512: ⚠️ Potential issue | 🟡 Minor

Minor robustness concern: `_extract_metadata_for_csv` could raise inside the error-handling path.

`_create_error_result` is called from `except` blocks (e.g., line 178). If `self._extract_metadata_for_csv(request)` fails (e.g., `get_metric_metadata` raises), the original error is swallowed and replaced with an unrelated exception. Consider wrapping the call defensively:

Proposed fix:

```diff
 def _create_error_result(
     self, request: EvaluationRequest, reason: str, execution_time: float
 ) -> EvaluationResult:
     """Create an ERROR result for failed evaluation."""
     turn_data = request.turn_data
+    try:
+        metadata = self._extract_metadata_for_csv(request)
+    except Exception:  # noqa: BLE001
+        metadata = None
     return EvaluationResult(
         conversation_group_id=request.conv_data.conversation_group_id,
         tag=request.conv_data.tag,
         turn_id=request.turn_id,
         metric_identifier=request.metric_identifier,
-        metric_metadata=self._extract_metadata_for_csv(request),
+        metric_metadata=metadata,
         result="ERROR",
```
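The hazard the reviewer describes can be demonstrated with a minimal, self-contained sketch (the function names here are hypothetical stand-ins, not the actual evaluator code): raising inside an `except` block replaces the original exception, so the root-cause failure never reaches the caller unless the secondary call is guarded.

```python
def risky_operation():
    """Stand-in for an evaluation step that fails first."""
    raise ValueError("original evaluation failure")


def extract_metadata():
    """Hypothetical stand-in for a metadata helper that also fails."""
    raise KeyError("metric_metadata")


def handle_without_guard():
    try:
        risky_operation()
    except ValueError:
        # Building the error result itself raises KeyError,
        # masking the original ValueError from the caller.
        return ("ERROR", extract_metadata())


def handle_with_guard():
    try:
        risky_operation()
    except ValueError as exc:
        try:
            metadata = extract_metadata()
        except Exception:  # keep the original failure visible
            metadata = None
        return ("ERROR", str(exc), metadata)
```

With the guard, the caller still sees the original failure reason alongside a `None` metadata value; without it, the unrelated `KeyError` propagates instead.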
🧹 Nitpick comments (1)
tests/unit/core/output/test_generator.py (1)
362-386: Consider asserting `metric_metadata` for FAIL and ERROR rows too.

The FAIL and ERROR results don't set `metric_metadata`, so they'll serialize as empty strings in the CSV. Adding assertions for those rows would guard against regressions (e.g., accidental default values):

```python
assert rows[1]["metric_metadata"] == ""
assert rows[2]["metric_metadata"] == ""
```
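The empty-string behavior the nitpick relies on comes from Python's `csv` module, which writes `None` values as the empty string. A minimal round-trip sketch (the column names mirror the CSV discussed here, but the rows are invented for illustration):

```python
import csv
import io

# Hypothetical result rows: only the PASS row carries metadata.
rows = [
    {"result": "PASS", "metric_metadata": '{"threshold": 0.8}'},
    {"result": "FAIL", "metric_metadata": None},
    {"result": "ERROR", "metric_metadata": None},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["result", "metric_metadata"])
writer.writeheader()
writer.writerows(rows)  # None serializes as the empty string

buf.seek(0)
read_back = list(csv.DictReader(buf))
# The FAIL and ERROR rows read back with empty metric_metadata.
```

Asserting on `read_back[1]` and `read_back[2]` in a test pins down exactly the regression the reviewer is worried about.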
Description
We are supposed to store `metric_metadata` for each metric in the CSV file. This column should contain all properties of a metric that will be used for evaluation (except the metric identifier and threshold).

Currently the field doesn't contain any value because of an incorrect property name.
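The bug class described above tends to be silent when attribute access goes through a lookup with a default. A hypothetical sketch (these names are invented for illustration, not the actual evaluator code):

```python
class MetricConfig:
    """Hypothetical stand-in for a metric configuration object."""

    def __init__(self):
        self.metric_metadata = {"model": "gpt-4", "temperature": 0.0}


config = MetricConfig()

# Bug pattern: a misspelled property name combined with a default
# silently yields None, so the CSV column ends up empty instead of
# the lookup failing loudly at the call site.
wrong = getattr(config, "metric_meta_data", None)
right = getattr(config, "metric_metadata", None)
```

A direct attribute access (`config.metric_meta_data`) would raise `AttributeError` immediately, which is one argument for avoiding `getattr` defaults on fields that are expected to exist.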
Type of change
Tools used to create PR
Identify any AI code assistants used in this PR (for transparency and review context)
Related Tickets & Documents
Checklist before requesting a review
Testing
Summary by CodeRabbit