Rethinking MUSHRA: Addressing Modern Challenges in Text-to-Speech Evaluation
Abstract
Despite rapid advancements in TTS models, a consistent and robust human evaluation framework is still lacking. For example, MOS tests fail to differentiate between similar models, and CMOS's pairwise comparisons are time-intensive. The MUSHRA test is a promising alternative for evaluating multiple TTS systems simultaneously, but in this work we show that its reliance on matching human reference speech unduly penalises the scores of modern TTS systems that can exceed human speech quality. More specifically, we conduct a comprehensive assessment of the MUSHRA test, focusing on its sensitivity to factors such as rater variability, listener fatigue, and reference bias. Based on our extensive evaluation involving 492 human listeners across Hindi and Tamil, we identify two primary shortcomings: (i) reference-matching bias, where raters are unduly influenced by the human reference, and (ii) judgement ambiguity, arising from a lack of clear fine-grained guidelines. To address these issues, we propose two refined variants of the MUSHRA test. The first variant enables fairer ratings for synthesized samples that surpass human reference quality. The second variant reduces ambiguity, as indicated by the relatively lower variance across raters. By combining these approaches, we achieve both more reliable and more fine-grained assessments. We also release MANGO, a massive dataset of 246,000 human ratings and a first-of-its-kind collection for Indian languages, to aid the analysis of human preferences and the development of automatic metrics for evaluating TTS systems.
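As a rough illustration of the quantities the abstract refers to, the sketch below aggregates MUSHRA-style ratings (on the standard 0-100 scale) into per-system means and inter-rater spread, the latter being the kind of variance-across-raters statistic used to gauge judgement ambiguity. All names and data here are hypothetical; this is not the paper's pipeline or the MANGO format.

```python
# Minimal sketch (illustrative only): aggregating MUSHRA-style ratings.
# Assumes each rating is a (listener_id, system, score) triple with
# scores on the 0-100 MUSHRA scale; systems and scores are toy values.
from collections import defaultdict
from statistics import mean, stdev

ratings = [
    ("l1", "reference", 98), ("l1", "tts_a", 92), ("l1", "tts_b", 70),
    ("l2", "reference", 95), ("l2", "tts_a", 97), ("l2", "tts_b", 64),
    ("l3", "reference", 99), ("l3", "tts_a", 88), ("l3", "tts_b", 75),
]

# Group scores by system across all listeners.
by_system = defaultdict(list)
for _, system, score in ratings:
    by_system[system].append(score)

for system, scores in sorted(by_system.items()):
    # The mean summarises perceived quality; the standard deviation
    # across raters reflects how ambiguous the judgement task was.
    print(f"{system}: mean={mean(scores):.1f}, sd={stdev(scores):.1f}")
```

Note that under the standard protocol a synthesized system scoring above the reference (as `tts_a` does for listener `l2` here) is exactly the case the paper argues is suppressed by reference-matching bias.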
Cite
Text
Varadhan et al. "Rethinking MUSHRA: Addressing Modern Challenges in Text-to-Speech Evaluation." Transactions on Machine Learning Research, 2025.
Markdown
[Varadhan et al. "Rethinking MUSHRA: Addressing Modern Challenges in Text-to-Speech Evaluation." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/varadhan2025tmlr-rethinking/)
BibTeX
@article{varadhan2025tmlr-rethinking,
  title   = {{Rethinking MUSHRA: Addressing Modern Challenges in Text-to-Speech Evaluation}},
  author  = {Varadhan, Praveen Srinivasa and Gulati, Amogh and Sankar, Ashwin and Anand, Srija and Gupta, Anirudh and Mukherjee, Anirudh and Marepally, Shiva Kumar and Bhatia, Ankur and Jaju, Saloni and Bhooshan, Suvrat and Khapra, Mitesh M},
  journal = {Transactions on Machine Learning Research},
  year    = {2025},
  url     = {https://mlanthology.org/tmlr/2025/varadhan2025tmlr-rethinking/}
}