SALAD: Self-Assessment Learning for Action Detection
Abstract
Literature on self-assessment in machine learning mainly focuses on producing well-calibrated algorithms through consensus frameworks; that is, calibration is treated as a problem to be solved. Yet, we observe that learning to be properly confident can act as a powerful regularizer and thus as an opportunity to improve performance. Specifically, we show that, within an action detection framework, learning a self-assessment score improves the whole action localization process. Experimental results show that our approach outperforms the state of the art on two action detection benchmarks. On the THUMOS14 dataset, the mAP at tIoU = 0.5 is improved from 42.8% to 44.6%, and from 50.4% to 51.7% on the ActivityNet1.3 dataset. For lower tIoU values, we achieve even more significant improvements on both datasets.
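To make the core idea concrete, here is a minimal, hypothetical sketch (not the paper's actual architecture) of what learning a self-assessment score can look like: alongside its class score, a detector predicts a confidence trained to regress the temporal IoU (tIoU) of its own segment proposal, and that confidence is used to rescore detections at inference. All function names below are illustrative assumptions.

```python
# Illustrative sketch only -- not the method from the paper.
# A detector emits a temporal segment, a class score, and a self-assessment
# confidence; the confidence is supervised to match the achieved tIoU.

def tiou(seg_a, seg_b):
    """Temporal IoU between two (start, end) segments, in seconds."""
    inter = max(0.0, min(seg_a[1], seg_b[1]) - max(seg_a[0], seg_b[0]))
    union = (seg_a[1] - seg_a[0]) + (seg_b[1] - seg_b[0]) - inter
    return inter / union if union > 0 else 0.0

def self_assessment_loss(pred_seg, gt_seg, pred_conf):
    """Squared error between predicted confidence and actual tIoU.
    Minimizing this teaches the model to be 'properly confident',
    which the paper argues behaves like a regularizer."""
    return (pred_conf - tiou(pred_seg, gt_seg)) ** 2

def rescore(class_score, pred_conf):
    """At inference, fuse the classification score with the
    learned self-assessment score (simple product here)."""
    return class_score * pred_conf
```

A detection whose confidence matches its true overlap incurs zero self-assessment loss; over- or under-confident predictions are penalized quadratically.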
Cite

Vaudaux-Ruth et al. "SALAD: Self-Assessment Learning for Action Detection." Winter Conference on Applications of Computer Vision, 2021.

BibTeX
@inproceedings{vaudauxruth2021wacv-salad,
title = {{SALAD: Self-Assessment Learning for Action Detection}},
author = {Vaudaux-Ruth, Guillaume and Chan-Hon-Tong, Adrien and Achard, Catherine},
booktitle = {Winter Conference on Applications of Computer Vision},
year = {2021},
pages = {1269--1278},
url = {https://mlanthology.org/wacv/2021/vaudauxruth2021wacv-salad/}
}