Layer-Importance Guided Adaptive Quantization for Efficient Speech Emotion Recognition
Abstract
Speech Emotion Recognition (SER) systems are crucial for enhancing human-machine interaction. Deep learning models have achieved significant success in SER without manually engineered features, but they require substantial computational resources and extensive hyper-parameter tuning, limiting their deployment on edge devices. To address these limitations, we propose an efficient and lightweight Multilayer Perceptron (MLP) classifier within a custom SER framework. Furthermore, we introduce a novel adaptive quantization scheme based on layer importance to reduce model size. This method balances model compression and performance by adaptively selecting the bit-width precision of each layer according to its importance, ensuring the quantized model maintains accuracy within an acceptable threshold. Unlike previous mixed-precision methods, which are often complex and costly, our approach is both interpretable and efficient. Our model is evaluated on benchmark SER datasets, using features such as Mel-Frequency Cepstral Coefficients (MFCCs), Chroma, and Mel-spectrograms. Our experiments show that our quantization scheme achieves performance comparable to state-of-the-art methods while significantly reducing model size, making it well-suited for lightweight devices.
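The sketch below illustrates the general idea of layer-importance guided mixed-precision quantization described in the abstract; it is not the authors' implementation. The mean-absolute-weight importance proxy, the uniform symmetric quantizer, the bit-width candidates, and the layer shapes are all illustrative assumptions, and the paper's accuracy-threshold check is omitted.

```python
# Minimal sketch (assumptions noted above): assign fewer bits to MLP layers
# that appear less important, keeping more precision for important layers.
import numpy as np


def layer_importance(weights: np.ndarray) -> float:
    """Proxy importance score: mean absolute weight magnitude (assumed metric)."""
    return float(np.mean(np.abs(weights)))


def quantize(weights: np.ndarray, bits: int) -> np.ndarray:
    """Uniform symmetric quantization to `bits`, returned in dequantized form."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(weights)) / qmax
    if scale == 0:
        return weights.copy()
    q = np.clip(np.round(weights / scale), -qmax, qmax)
    return q * scale


def adaptive_quantize(layers, bit_candidates=(8, 6, 4, 2)):
    """Rank layers by importance and map ranks to bit-widths.

    `layers` is a list of weight matrices; returns (quantized_layers, bit_plan).
    """
    scores = [layer_importance(w) for w in layers]
    order = np.argsort(scores)                 # least important first
    ranks = np.empty(len(layers), dtype=int)
    ranks[order] = np.arange(len(layers))
    quantized, bit_plan = [], []
    for w, rank in zip(layers, ranks):
        # Most important layers keep the widest bit-width candidate.
        idx = min(len(bit_candidates) - 1,
                  (len(layers) - 1 - rank) * len(bit_candidates) // max(len(layers), 1))
        bits = bit_candidates[idx]
        bit_plan.append(bits)
        quantized.append(quantize(w, bits))
    return quantized, bit_plan


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical MLP: 40-d acoustic features in, 8 emotion classes out.
    mlp_weights = [rng.normal(size=(40, 128)),
                   rng.normal(size=(128, 64)),
                   rng.normal(size=(64, 8))]
    _, plan = adaptive_quantize(mlp_weights)
    print("Per-layer bit-widths:", plan)
```

In a full pipeline, the bit plan would be accepted only if the quantized model's validation accuracy stays within the acceptable threshold mentioned in the abstract; otherwise the lowest-precision layers would be promoted to wider bit-widths and re-evaluated.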
Cite
Text
Shinde et al. "Layer-Importance Guided Adaptive Quantization for Efficient Speech Emotion Recognition." NeurIPS 2024 Workshops: Compression, 2024.Markdown
[Shinde et al. "Layer-Importance Guided Adaptive Quantization for Efficient Speech Emotion Recognition." NeurIPS 2024 Workshops: Compression, 2024.](https://mlanthology.org/neuripsw/2024/shinde2024neuripsw-layerimportance/)BibTeX
@inproceedings{shinde2024neuripsw-layerimportance,
title = {{Layer-Importance Guided Adaptive Quantization for Efficient Speech Emotion Recognition}},
author = {Shinde, Tushar and Jain, Ritika and Sharma, Avinash Kumar},
booktitle = {NeurIPS 2024 Workshops: Compression},
year = {2024},
url = {https://mlanthology.org/neuripsw/2024/shinde2024neuripsw-layerimportance/}
}