fix: benchmark overview shows wrong latency metric (#245)
Conversation
JmhBenchmarkConverter.createOverview() was using a heuristic (min avgt score) to select the latency benchmark, ignoring the explicit names configured in ReportConfiguration. This caused the dashboard to show measureConcurrentValidation instead of the configured measureAverageTime.

Changes:
- JmhBenchmarkConverter: accept configured benchmark names and prefer them over the min/max heuristic in createOverview()
- BenchmarkResultProcessor: pass ReportConfiguration to the converter
- Remove the redundant measureConcurrentValidation benchmark (identical to measureAverageTime) from SimpleCoreValidationBenchmark, CoreJfrBenchmark, and UnifiedJfrBenchmark
- Remove the identical validateWithRotation() from CoreValidationDelegate
- Update test data and test references accordingly

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
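The selection change described above can be sketched as follows. This is a minimal illustration, not the actual cui-jwt code: the record, class, and method names (BenchmarkScore, OverviewSelector, selectLatency) are hypothetical stand-ins for JmhBenchmarkConverter internals, assuming the configured name is matched exactly before falling back to the min-avgt heuristic.

```java
import java.util.Comparator;
import java.util.List;

// Hypothetical stand-in for one JMH result row (name + avgt score).
record BenchmarkScore(String name, double avgtScore) {}

class OverviewSelector {
    // Prefer the explicitly configured benchmark name; only fall back
    // to the minimum-avgt heuristic when no configured name matches.
    static BenchmarkScore selectLatency(List<BenchmarkScore> results, String configuredName) {
        if (configuredName != null) {
            for (BenchmarkScore r : results) {
                if (r.name().equals(configuredName)) {
                    return r;
                }
            }
        }
        return results.stream()
                .min(Comparator.comparingDouble(BenchmarkScore::avgtScore))
                .orElseThrow();
    }
}

public class Demo {
    public static void main(String[] args) {
        List<BenchmarkScore> results = List.of(
                new BenchmarkScore("measureAverageTime", 12.5),
                new BenchmarkScore("measureConcurrentValidation", 9.8));
        // Heuristic alone picks the lower-scoring benchmark:
        System.out.println(OverviewSelector.selectLatency(results, null).name());
        // With a configured name, the configured benchmark wins:
        System.out.println(OverviewSelector.selectLatency(results, "measureAverageTime").name());
    }
}
```

Under this sketch, the first call reproduces the bug (the heuristic selects measureConcurrentValidation because its score is lower), and the second shows the fixed behavior.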
Summary of Changes (Gemini Code Assist)

This pull request resolves an issue where the benchmark overview displayed the wrong latency metric. By making the benchmark converter respect the explicitly configured benchmark names, it ensures that performance dashboards present accurate data. The changes also streamline the benchmarking codebase by removing redundant benchmark methods and associated dead code, improving maintainability and clarity.
Code Review
This pull request effectively addresses the issue of the wrong latency metric being displayed in the benchmark overview. The fix correctly uses configured benchmark names for selecting throughput and latency metrics, making the selection process more robust. I appreciate the additional cleanup of removing the redundant measureConcurrentValidation benchmark and associated dead code, which improves the overall maintainability of the codebase. The changes are well-structured and logical. I have one suggestion to further improve the robustness of the benchmark selection logic, recommending the use of equals() instead of contains() for more precise matching.
...king-common/src/main/java/de/cuioss/benchmarking/common/converter/JmhBenchmarkConverter.java
Prevents incorrect matches when one benchmark name is a substring of another's. Addresses Gemini review feedback. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
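The substring pitfall this commit closes can be shown in a few lines. The longer benchmark name below (measureAverageTimeWithWarmup) is a hypothetical example, not a real benchmark in this project; it simply illustrates why contains() can mis-match where equals() cannot.

```java
public class MatchDemo {
    public static void main(String[] args) {
        String configured = "measureAverageTime";
        // Hypothetical benchmark whose name embeds the configured one.
        String candidate = "measureAverageTimeWithWarmup";

        // contains() treats the longer name as a match for the configured name:
        System.out.println(candidate.contains(configured)); // true
        // equals() accepts only the exact configured name:
        System.out.println(candidate.equals(configured));   // false
    }
}
```

This is why exact matching via equals() is the safer choice when benchmark names are configured explicitly.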
Summary
JmhBenchmarkConverter.createOverview() now uses the configured benchmark names from ReportConfiguration instead of blindly picking the minimum-scoring avgt benchmark. This ensures the dashboard shows the correct measureAverageTime as the latency metric, not measureConcurrentValidation.

measureConcurrentValidation was identical to measureAverageTime (both called validateWithFullSpectrum() / validateWithRotation(), which were copy-paste identical). Removed from SimpleCoreValidationBenchmark, CoreJfrBenchmark, and UnifiedJfrBenchmark.

CoreValidationDelegate.validateWithRotation() was identical to validateWithFullSpectrum().

Test plan
- ./mvnw clean install -pl benchmarking/benchmarking-common — 181 tests pass
- ./mvnw clean install -pl benchmarking/benchmark-core — 2 tests pass, compilation clean
- shouldCalculatePerformanceScoreAccordingToDocumentation verifies correct benchmark selection from real JMH data

🤖 Generated with Claude Code