PilotScope now includes utilities to save, load, and compare algorithm performance test results in JSON format, together with visualization charts.
All test files now automatically save results in JSON format:
```bash
# Run baseline test
python test_example_algorithms/test_baseline_performance.py

# Run MSCN test
python test_example_algorithms/test_mscn_example.py

# Run Lero test
python test_example_algorithms/test_lero_example.py
```

Output:

- `results/{algo}_{db}_{timestamp}.json` - JSON format with all metrics
- `img/{algo}_{db}.png` - Individual visualization chart

To compare saved results, use the comparison tool:

```bash
# Compare latest baseline vs MSCN
python algorithm_examples/compare_results.py --latest baseline mscn

# Compare baseline vs MSCN vs Lero
python algorithm_examples/compare_results.py --latest baseline mscn lero
```

To compare specific result files:

```bash
python algorithm_examples/compare_results.py \
    results/baseline_stats_tiny_20231014_120000.json \
    results/mscn_stats_tiny_20231014_130000.json \
    results/lero_stats_tiny_20231014_140000.json
```

To list all saved results:

```bash
python algorithm_examples/compare_results.py --list
```

The same utilities can be used directly from Python:

```python
from algorithm_examples.utils import (
    save_test_result,
    compare_algorithms,
    list_saved_results,
    load_test_results
)
# Save results after your test
save_test_result('my_algorithm', 'stats_tiny', extra_info={'note': 'test run'})
# Compare multiple results
compare_algorithms([
    'results/baseline_stats_tiny_20231014_120000.json',
    'results/mscn_stats_tiny_20231014_130000.json'
], metric='total_time', output_path='results/my_comparison')
# List all saved results
grouped = list_saved_results()
# Load specific results for custom processing
results = load_test_results([
    'results/baseline_stats_tiny_20231014_120000.json',
    'results/mscn_stats_tiny_20231014_130000.json'
])
```

The files involved are organized as follows:

```
algorithm_examples/
├── compare_results.py # Comparison tool
├── utils.py # Utility functions (save/load/compare)
└── test_*.py # Test files (auto-save results)
results/ # Auto-created directory
├── baseline_stats_tiny_20231014_120000.json
├── mscn_stats_tiny_20231014_130000.json
├── lero_stats_tiny_20231014_140000.json
└── comparison_total_time_20231014_150000.png
img/ # Individual algorithm charts
├── baseline_stats_tiny.png
├── mscn_stats_tiny.png
└── lero_stats_tiny.png
```

Each saved JSON result records the following metrics:

- `total_time`: Total execution time for all queries
- `average_time`: Average execution time per query
- `query_count`: Number of queries executed
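
Because the results are plain JSON, they can also be inspected without any PilotScope helpers. A minimal sketch (the file name follows the naming scheme above; the snippet makes no assumption about the exact key layout and simply prints whatever was saved):

```python
import json

# A result file named according to the scheme shown above; adjust to a real file.
result_path = 'results/baseline_stats_tiny_20231014_120000.json'

with open(result_path) as f:
    result = json.load(f)

# Pretty-print every stored field for quick inspection.
print(json.dumps(result, indent=2))
```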

`save_test_result(algo_name, db_name, extra_info)`

Save the current TimeStatistic data in JSON format.

Parameters:
- `algo_name` (str): Algorithm name
- `db_name` (str): Database name
- `extra_info` (dict, optional): Additional metadata

Returns: Path to the saved JSON file
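
For example, once a test run has populated the timing statistics, the returned path can be kept for later comparison (a minimal sketch; the extra_info contents are arbitrary metadata of your choosing):

```python
from algorithm_examples.utils import save_test_result

# Assumes the test has already recorded query timings via TimeStatistic.
result_file = save_test_result(
    'mscn',                            # algorithm name
    'stats_tiny',                      # database name
    extra_info={'note': 'tuning run'}  # optional free-form metadata
)
print(f'Result saved to {result_file}')
```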

`compare_algorithms(result_files, metric, output_path)`

Compare multiple test results and generate a comparison chart.

Parameters:
- `result_files` (list): List of JSON file paths
- `metric` (str): Metric to compare (`'total_time'`, `'average_time'`, or `'query_count'`)
- `output_path` (str, optional): Output path for the chart (without extension)

Returns: Dictionary of comparison data
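
A usage sketch (the file names follow the naming scheme above; nothing is assumed about the returned dictionary beyond it holding the comparison data):

```python
from algorithm_examples.utils import compare_algorithms

comparison = compare_algorithms(
    [
        'results/baseline_stats_tiny_20231014_120000.json',
        'results/mscn_stats_tiny_20231014_130000.json'
    ],
    metric='average_time',                     # any of the documented metrics
    output_path='results/avg_time_comparison'  # chart is written as a PNG
)

# Inspect the returned comparison data.
print(comparison)
```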

`list_saved_results()`

List all saved test results grouped by algorithm and database.

Returns: Dictionary mapping (algorithm, database) to a list of result files
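
A sketch that walks the returned mapping (the keys are (algorithm, database) pairs, as documented above):

```python
from algorithm_examples.utils import list_saved_results

grouped = list_saved_results()

# Print how many runs were saved for each algorithm/database pair.
for (algo, db), files in grouped.items():
    print(f'{algo} on {db}: {len(files)} result file(s)')
    for path in files:
        print(f'  {path}')
```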

`load_test_results(result_files)`

Load multiple test result JSON files.

Parameters:
- `result_files` (list): List of JSON file paths

Returns: Dictionary mapping algorithm names to their metrics
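
A sketch that uses the documented metric names to compare two runs (it assumes each algorithm's metrics are exposed as a dictionary containing the total_time field listed above):

```python
from algorithm_examples.utils import load_test_results

results = load_test_results([
    'results/baseline_stats_tiny_20231014_120000.json',
    'results/mscn_stats_tiny_20231014_130000.json'
])

# results maps algorithm name -> metrics; compare total execution time.
baseline_time = results['baseline']['total_time']
mscn_time = results['mscn']['total_time']
print(f'MSCN speedup over baseline: {baseline_time / mscn_time:.2f}x')
```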

A complete workflow, from running tests to comparing the latest results:

```bash
# 1. Run baseline test
python test_example_algorithms/test_baseline_performance.py

# 2. Run MSCN test
python test_example_algorithms/test_mscn_example.py

# 3. List all results
python algorithm_examples/compare_results.py --list

# 4. Compare latest results
python algorithm_examples/compare_results.py --latest baseline mscn

# Output:
# 📊 Comparison chart saved: results/comparison_total_time_20231014_150000.png
#
# ============================================================
# Comparison Results (total_time):
# ============================================================
# mscn     : 45.2341s
# baseline : 67.8912s
# ============================================================
```

Notes:

- All test files automatically save results in JSON format when using `save_test_result()`
- JSON format enables easy programmatic comparison and custom analysis
- Comparison charts are saved as PNG images
- Timestamps ensure unique filenames and enable result history tracking
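
Because the timestamp is embedded in each file name, the most recent run per algorithm can be picked in plain Python and fed back into the comparison utility, mirroring the `--latest` flag. A sketch (it relies on the `YYYYMMDD_HHMMSS` timestamps making lexicographic order chronological, which holds for the naming scheme above):

```python
from algorithm_examples.utils import compare_algorithms, list_saved_results

# Keep only the newest result file for each (algorithm, database) pair.
# Within a group the file names differ only by timestamp, so the
# lexicographic maximum is also the most recent run.
latest_files = [max(files) for files in list_saved_results().values()]

if len(latest_files) >= 2:
    compare_algorithms(latest_files, metric='total_time')
```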