Temporal Rate Reduction Clustering for Human Motion Segmentation
Abstract
Human Motion Segmentation (HMS), which aims to partition videos into non-overlapping human motions, has attracted increasing research attention recently. Existing approaches for HMS are mainly dominated by subspace clustering methods, which are grounded on the assumption that high-dimensional temporal data align with a Union-of-Subspaces (UoS) distribution. However, the frames of videos capturing complex human motions with cluttered backgrounds may not align well with the UoS distribution. In this paper, we propose a novel approach for HMS, named Temporal Rate Reduction Clustering (TR²C), which jointly learns structured representations and affinity to segment the sequence of frames in a video. Specifically, the structured representations learned by TR²C enjoy temporal consistency and align well with a UoS structure, which is favorable for addressing the HMS task. We conduct extensive experiments on five benchmark HMS datasets and achieve state-of-the-art performance with different feature extractors. The code is available at: https://github.com/mengxianghan123/TR2C.
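The abstract rests on the Union-of-Subspaces (UoS) assumption behind subspace clustering: frames of the same motion lie near one low-dimensional subspace, so a self-expressive affinity concentrates within motions. The following NumPy sketch is not the paper's method; it is a minimal illustration of that assumption using synthetic data and a least-squares self-expressive model, with all dimensions chosen hypothetically.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for frame features: two 2-D subspaces in R^20,
# 30 "frames" drawn from each (columns of X), mimicking two motions.
d, k, n = 20, 2, 30
X = np.hstack([rng.standard_normal((d, k)) @ rng.standard_normal((k, n))
               for _ in range(2)])  # shape (20, 60)

# Least-squares self-expression: write each column as a combination of the
# others. Under the UoS assumption, large coefficients stay within a subspace.
lam = 0.1  # ridge regularizer, chosen arbitrarily for this demo
C = np.linalg.solve(X.T @ X + lam * np.eye(2 * n), X.T @ X)
np.fill_diagonal(C, 0.0)        # discard trivial self-representation
A = np.abs(C) + np.abs(C.T)     # symmetrized affinity matrix

# Affinity mass within each motion vs. across motions.
within = A[:n, :n].sum() + A[n:, n:].sum()
across = A[:n, n:].sum() + A[n:, :n].sum()
print(within > across)  # the affinity is (nearly) block-diagonal
```

Spectral clustering on such an affinity recovers the two groups; real video features often violate this clean structure, which is the gap the paper's learned representations aim to close.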
Cite
Text
Meng et al. "Temporal Rate Reduction Clustering for Human Motion Segmentation." International Conference on Computer Vision, 2025.
Markdown
[Meng et al. "Temporal Rate Reduction Clustering for Human Motion Segmentation." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/meng2025iccv-temporal/)
BibTeX
@inproceedings{meng2025iccv-temporal,
title = {{Temporal Rate Reduction Clustering for Human Motion Segmentation}},
author = {Meng, Xianghan and Tong, Zhengyu and Huang, Zhiyuan and Li, Chun-Guang},
booktitle = {International Conference on Computer Vision},
year = {2025},
pages = {14644--14654},
url = {https://mlanthology.org/iccv/2025/meng2025iccv-temporal/}
}