AudioGenX: Explainability on Text-to-Audio Generative Models
Abstract
Text-to-audio generation (TAG) models have achieved significant advances in generating audio conditioned on text descriptions. However, a critical challenge lies in the lack of transparency regarding how each textual input influences the generated audio. To address this issue, we introduce AudioGenX, an Explainable AI (XAI) method that explains text-to-audio generation models by highlighting the importance of input tokens. AudioGenX optimizes an Explainer with factual and counterfactual objective functions to provide faithful explanations at the audio-token level. This method offers a detailed and comprehensive view of the relationship between text inputs and audio outputs, enhancing both the explainability and trustworthiness of TAG models. Extensive experiments demonstrate the effectiveness of AudioGenX in producing faithful explanations, benchmarked against existing methods using novel evaluation metrics designed specifically for audio generation tasks.
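To make the factual/counterfactual idea concrete, below is a minimal sketch of mask-based token attribution in PyTorch. The DummyTAG model, loss weights, and tensor shapes are illustrative assumptions standing in for a real text-to-audio generator; this is not the authors' implementation, only the general optimization pattern the abstract describes: learn a soft mask over text tokens such that the masked input still reproduces the original audio tokens (factual) while the complementary mask does not (counterfactual).

import torch
import torch.nn.functional as F

torch.manual_seed(0)

class DummyTAG(torch.nn.Module):
    """Stand-in for a TAG model: pooled text embeddings -> audio-token logits (assumption)."""
    def __init__(self, d_text=16, n_audio_tokens=8, vocab=32):
        super().__init__()
        self.proj = torch.nn.Linear(d_text, n_audio_tokens * vocab)
        self.n_audio_tokens, self.vocab = n_audio_tokens, vocab

    def forward(self, text_emb):                       # text_emb: (T, d_text)
        pooled = text_emb.mean(dim=0)                  # crude pooling over text tokens
        return self.proj(pooled).view(self.n_audio_tokens, self.vocab)

model = DummyTAG()
text_emb = torch.randn(5, 16)                          # embeddings of 5 text tokens
with torch.no_grad():
    target = model(text_emb).argmax(dim=-1)            # audio tokens of the original output

# Learnable soft mask over text tokens (the explainer's parameters in this sketch).
mask_logits = torch.zeros(5, requires_grad=True)
optim = torch.optim.Adam([mask_logits], lr=0.05)

for step in range(200):
    m = torch.sigmoid(mask_logits)                             # per-token importance in [0, 1]
    factual = model(text_emb * m.unsqueeze(-1))                # keep the important tokens
    counter = model(text_emb * (1.0 - m).unsqueeze(-1))        # keep only the complement

    # Factual objective: the masked input should still yield the original audio tokens.
    loss_fact = F.cross_entropy(factual, target)
    # Counterfactual objective: the complement should assign low probability to them.
    prob_cf = F.softmax(counter, dim=-1).gather(1, target.unsqueeze(1)).mean()
    # Sparsity keeps the explanation compact (the 0.01 weight is an assumption).
    loss = loss_fact + prob_cf + 0.01 * m.mean()

    optim.zero_grad()
    loss.backward()
    optim.step()

print("token importance:", torch.sigmoid(mask_logits).detach().tolist())

After optimization, tokens whose mask weights stay near 1 are those the masked (factual) input needs to reproduce the original audio and whose removal (counterfactual) changes it most; in AudioGenX these weights serve as the token-level explanation.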
Cite
Text
Kang et al. "AudioGenX: Explainability on Text-to-Audio Generative Models." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I17.33950
Markdown
[Kang et al. "AudioGenX: Explainability on Text-to-Audio Generative Models." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/kang2025aaai-audiogenx/) doi:10.1609/AAAI.V39I17.33950
BibTeX
@inproceedings{kang2025aaai-audiogenx,
title = {{AudioGenX: Explainability on Text-to-Audio Generative Models}},
author = {Kang, Hyunju and Han, Geonhee and Jeong, Yoonjae and Park, Hogun},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2025},
pages = {17733--17741},
doi = {10.1609/AAAI.V39I17.33950},
url = {https://mlanthology.org/aaai/2025/kang2025aaai-audiogenx/}
}