This repository contains scripts and tools to automate benchmarking experiments for evaluating wasmCloud invocation performance under various load conditions. It includes HTTP load generators, data collection logic, system resource tracking, and result visualization.
The benchmarking suite includes:
- run_benchmark.py: Main automation script that performs deployment, warm-up, load generation, and result recording.
- path.py: Path utilities and configuration helpers.
- analyze_results.py: Script for parsing CSVs and generating visualizations using pandas and matplotlib.
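To illustrate the kind of aggregation analyze_results.py performs, here is a minimal stdlib-only sketch. The column names ("scenario", "latency_ms") are hypothetical, and the real script uses pandas and matplotlib; this only shows the shape of the work.

```python
# Hypothetical sketch: group benchmark CSV rows by scenario and average latency.
# Column names are assumptions, not the repository's actual schema.
import csv
import io
import statistics
from collections import defaultdict

def mean_latency_by_scenario(csv_text: str) -> dict:
    groups = defaultdict(list)
    for row in csv.DictReader(io.StringIO(csv_text)):
        groups[row["scenario"]].append(float(row["latency_ms"]))
    return {name: statistics.mean(vals) for name, vals in groups.items()}

sample = "scenario,latency_ms\nbaseline,2.0\nbaseline,4.0\nhigh_load,9.0\n"
print(mean_latency_by_scenario(sample))  # {'baseline': 3.0, 'high_load': 9.0}
```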
Two HTTP benchmarking tools are used and should be installed: hey and Vegeta.
Resource sampling collects memory metrics using psutil. Collecting PSS values requires root permissions, so run the benchmark with sudo when accurate memory measurements are needed.
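For context on where the PSS number comes from, here is a stdlib-only sketch that parses the kernel's smaps format directly (the repository itself uses psutil). Reading another process's smaps is what requires root. The helper names are hypothetical; the "Pss:" field format follows /proc/[pid]/smaps on Linux.

```python
# Sketch: summing 'Pss:' lines (in kB) from /proc/<pid>/smaps content.
# Helper names are hypothetical; the field format is the Linux smaps layout.
import re

def parse_pss_kb(smaps_text: str) -> int:
    """Sum all 'Pss:' entries (kB) in smaps/smaps_rollup text."""
    return sum(int(m) for m in re.findall(r"^Pss:\s+(\d+) kB", smaps_text, re.MULTILINE))

def read_pss_kb(pid: int) -> int:
    # Opening another process's smaps file needs root -- hence the sudo note above.
    with open(f"/proc/{pid}/smaps_rollup") as f:
        return parse_pss_kb(f.read())

sample = "Rss:     1024 kB\nPss:      512 kB\nPss:      128 kB\n"
print(parse_pss_kb(sample))  # 640
```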
To enable verbose logging of load generator output, pass the --save-load-generator-output flag when starting the process:
python run_benchmark.py --save-load-generator-output
Raw logs from hey and Vegeta are saved alongside the parsed results.
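Vegeta's raw output can be converted to a JSON report (via `vegeta report -type=json`), which is straightforward to post-process. The sketch below extracts latency percentiles from such a report; the field names ("latencies", "success", nanosecond units) match Vegeta's JSON report format as of recent versions, but treat them as an assumption and verify against your Vegeta version.

```python
# Sketch: pulling latency percentiles out of a Vegeta JSON report.
# Field names and nanosecond units are assumptions about Vegeta's format.
import json

def summarize_vegeta(report_json: str) -> dict:
    report = json.loads(report_json)
    lat = report["latencies"]  # Vegeta reports latencies in nanoseconds
    return {
        "p50_ms": lat["50th"] / 1e6,
        "p95_ms": lat["95th"] / 1e6,
        "success_rate": report["success"],
    }

sample = '{"latencies": {"50th": 2000000, "95th": 8000000}, "success": 0.99}'
print(summarize_vegeta(sample))
```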
Confidence intervals are configurable and can be calculated using one of:
- Standard deviation
- T-distribution
- Z-distribution
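The three options above can be sketched with the standard library alone. This is an illustrative implementation, not the repository's actual code; the t-critical value is hard-coded for df = 9 at 95% confidence because scipy is not assumed (swap in scipy.stats.t.ppf for arbitrary sample sizes).

```python
# Sketch of the three interval options; function name and API are hypothetical.
import math
import statistics

def interval(samples, method="z", confidence=0.95):
    mean = statistics.mean(samples)
    sd = statistics.stdev(samples)
    n = len(samples)
    if method == "stddev":           # mean +/- one standard deviation
        half = sd
    elif method == "z":              # normal (z) distribution
        z = statistics.NormalDist().inv_cdf((1 + confidence) / 2)
        half = z * sd / math.sqrt(n)
    elif method == "t":              # Student's t, df = n - 1
        t_crit = 2.262               # 95% critical value for df = 9 (hard-coded)
        half = t_crit * sd / math.sqrt(n)
    else:
        raise ValueError(f"unknown method: {method}")
    return (mean - half, mean + half)

data = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.1, 9.7, 10.0, 10.4]
print(interval(data, "z"))
```

Note that for small samples the t-based interval is wider than the z-based one (2.262 vs. roughly 1.96 at 95%), which is why the t-distribution is the usual choice when n is small.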
Install Python dependencies using:
pip install -r requirements.txt