Versatile Evaluation of Speech and Audio
Updated Dec 9, 2025 - Python
A standalone tool for evaluating Automatic Speech Recognition (ASR) models using the Word Error Rate (WER) metric, optimized in particular for medical and clinical speech recognition.
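WER is the word-level edit distance (substitutions + insertions + deletions) between a reference transcript and a hypothesis, divided by the reference word count. A minimal sketch of that computation, purely illustrative and not the tool's actual implementation:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: edit distance over words / reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# One deleted word ("a") out of 5 reference words -> WER = 0.2
print(wer("the patient has a fever", "the patient has fever"))
```

Note that WER can exceed 1.0 when the hypothesis contains many insertions relative to a short reference.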
A small, repeatable audio stress-test harness for evaluating the robustness of acoustic indicators under adversarial perturbations. It produces deferral signals, not binary labels.
🔊 Evaluate audio performance with the MiMo-Audio-Eval toolkit, designed for accurate assessment and streamlined analysis in audio processing tasks.