A Python application that uses computer vision to analyze motorcycle POV videos, extracting:
- Speedometer readings
- Potential highlights from your ride (other vehicles, traffic signals, road signs)
- TODO: Integrate GoPro telemetry data for HERO5 Black and newer cameras
This is an experimental personal project, not yet intended as a one-size-fits-all solution. The visual speed tracking remains untested on motorcycles with analog gauges; what I do know is that it works excellently on videos of my 2015 Honda CB300F, with its seven-segment display, under good visibility conditions. Speed tracking performs best when your videos are taken in daylight or with the dash's backlight on, free of glare, heavy blur, and obstructions. In short, the program expects whatever you might expect for the dashboard to be legible to the naked eye. It must also be able to identify a region of interest that resembles a motorcycle, so your camera needs to be angled far enough down to capture some other part of your bike.
That said, the program is somewhat robust: it makes every effort to discard noisy speed readings that are obvious outliers, and the interval between analyzed frames is configurable in case you want more sensitivity. It can also skip past brief corrupted segments in the video, which I have encountered in my own testing.
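The project's actual outlier-removal logic isn't documented here, but a rolling-median filter is one common way to drop readings that jump implausibly between neighboring samples. A minimal sketch, where the window size and jump threshold are illustrative choices of mine, not the project's real values:

```python
from statistics import median

def filter_outliers(speeds, window=5, max_jump=25.0):
    """Drop speed readings that deviate from the local median by more
    than max_jump (in whatever unit the dash reports).

    speeds: list of (timestamp, speed) samples in chronological order.
    """
    kept = []
    for i, (t, s) in enumerate(speeds):
        # Compare each sample against the median of its neighbors.
        lo = max(0, i - window // 2)
        hi = min(len(speeds), i + window // 2 + 1)
        local = median(v for _, v in speeds[lo:hi])
        if abs(s - local) <= max_jump:
            kept.append((t, s))
    return kept

# A misread of 180 between plausible readings gets discarded:
print(filter_outliers([(0, 42), (1, 44), (2, 180), (3, 45)]))
```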
- Create a virtual environment:

```shell
python -m venv venv
source venv/bin/activate
```

- Install dependencies:

```shell
# remove --index-url if using NVIDIA, keep for AMD
pip install torch torchvision --index-url https://download.pytorch.org/whl/rocm6.3
# rest of dependencies
pip install -r requirements.txt
```

- Get an OpenRouter API key (for reading the speedometer) and add it to a `.env` file:

```
OPENROUTER_API_KEY=sk-or-v1-999999999999...
```

Note: with the default VLM, Gemini 2.5 Flash Experimental, a single request (one speedometer reading) usually costs under US$0.0005. You can configure the interval at which frames are analyzed to suit your needs (the default is one frame per second).
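Since one request is made per analyzed frame, the cost of a clip scales linearly with its length divided by the frame interval. A rough back-of-the-envelope estimate, using the approximate per-request price above:

```python
def estimate_cost(duration_s: float, interval_s: float = 1.0,
                  usd_per_request: float = 0.0005) -> float:
    """Rough upper bound on VLM cost for one clip: one request per
    analyzed frame, one frame every interval_s seconds."""
    requests = int(duration_s // interval_s) + 1  # +1 for the frame at t=0
    return requests * usd_per_request

# A 30-minute ride at the default 1 frame/second:
print(f"${estimate_cost(30 * 60):.2f}")  # about $0.90
```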
Traditional OCR libraries like EasyOCR and Tesseract can run locally, but they seem to fall short for this task even when the frames are preprocessed somewhat. You have to be very precise with your ROI, which is not something I'm attempting for now.
Enter your virtual environment and run the script:
```shell
python gui.py
```

Then, set the options in the "Processing" tab and you're good to go.
You probably won't need to manually open these files, since the GUI has an "Analytics" tab which displays everything for you.
In case you ever need the raw data, the output files are placed in the following directory structure:
```
output
└── my_clip
    ├── analysis_results.json
    ├── frames
    │   ├── frame_00000.jpg
    │   ├── frame_00029.jpg
    │   └── ...
    └── speed_plot.png
```
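If you do want to script against the raw data, the layout above is regular enough to build paths for programmatically. A small sketch (the helper name is mine; it only assumes the directory structure shown above):

```python
from pathlib import Path

def result_paths(output_root: str, clip_name: str) -> dict:
    """Build the expected output paths for one processed clip,
    following the directory layout shown above."""
    clip_dir = Path(output_root) / clip_name
    return {
        "results": clip_dir / "analysis_results.json",
        "frames": clip_dir / "frames",
        "plot": clip_dir / "speed_plot.png",
    }

paths = result_paths("output", "my_clip")
print(paths["results"].as_posix())  # output/my_clip/analysis_results.json
```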

