Long-Tail Temporal Action Segmentation with Group-Wise Temporal Logit Adjustment
Abstract
Procedural activity videos often exhibit a long-tailed action distribution due to varying action frequencies and durations. However, state-of-the-art temporal action segmentation methods overlook the long tail and fail to recognize tail actions. Existing long-tail methods make class-independent assumptions and struggle to identify tail classes when applied to temporal segmentation frameworks. This work proposes a novel group-wise temporal logit adjustment (G-TLA) framework that combines a group-wise softmax formulation with logit adjustment, leveraging activity information and action ordering. The proposed framework significantly improves segmentation of tail actions without any performance loss on head actions. Source code is available at https://github.com/pangzhan27/GTLA.
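To make the idea concrete, below is a minimal sketch of standard class-independent logit adjustment, the baseline technique that G-TLA extends. It is not the paper's group-wise, activity-aware method; the function name, `tau`, and the toy priors are illustrative assumptions.

```python
import numpy as np

def logit_adjust(logits, class_priors, tau=1.0):
    """Standard logit adjustment (not the paper's G-TLA variant):
    subtract tau * log(prior) from each class logit, which boosts
    rare (tail) classes relative to frequent (head) ones."""
    return logits - tau * np.log(class_priors)

# Toy example: the head class (90% of frames) narrowly outscores
# the tail class before adjustment.
priors = np.array([0.9, 0.1])
logits = np.array([2.0, 1.9])

adjusted = logit_adjust(logits, priors)
# The tail logit gains -log(0.1) ~= 2.30, the head logit only
# -log(0.9) ~= 0.11, so the predicted class flips to the tail class.
```

G-TLA replaces this single class-independent adjustment with group-wise softmaxes and an adjustment informed by activity context and action ordering.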
Cite
Text
Pang et al. "Long-Tail Temporal Action Segmentation with Group-Wise Temporal Logit Adjustment." Proceedings of the European Conference on Computer Vision (ECCV), 2024. doi:10.1007/978-3-031-73404-5_19
Markdown
[Pang et al. "Long-Tail Temporal Action Segmentation with Group-Wise Temporal Logit Adjustment." Proceedings of the European Conference on Computer Vision (ECCV), 2024.](https://mlanthology.org/eccv/2024/pang2024eccv-longtail/) doi:10.1007/978-3-031-73404-5_19
BibTeX
@inproceedings{pang2024eccv-longtail,
title = {{Long-Tail Temporal Action Segmentation with Group-Wise Temporal Logit Adjustment}},
author = {Pang, Zhanzhong and Sener, Fadime and Ramasubramanian, Shrinivas and Yao, Angela},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2024},
doi = {10.1007/978-3-031-73404-5_19},
url = {https://mlanthology.org/eccv/2024/pang2024eccv-longtail/}
}