Self-Supervised Neural Aggregation Networks for Human Parsing
Abstract
In this paper, we present a Self-Supervised Neural Aggregation Network (SS-NAN) for human parsing. SS-NAN adaptively learns to aggregate the multi-scale features at each pixel "address". To further improve the discriminative capacity of the features, a self-supervised joint loss is adopted as an auxiliary learning strategy, which imposes human joint structures on parsing results without resorting to extra supervision. The proposed SS-NAN is end-to-end trainable. It can be integrated into any advanced neural network to help aggregate features according to their importance at different positions and scales, and to incorporate rich high-level knowledge of human joint structures from a global perspective, which in turn improves the parsing results. Comprehensive evaluations on the recent Look into Person (LIP) and PASCAL-Person-Part benchmark datasets demonstrate the significant superiority of our method over other state-of-the-art approaches.
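The core aggregation idea described above (a learned, per-pixel softmax weighting over feature maps computed at several input scales) can be sketched as follows. This is a minimal illustration, not the authors' implementation: in the paper the per-pixel attention map is predicted by a learned branch of the network, whereas here, as a labeled stand-in, the weights are derived from the channel-mean of the score maps themselves.

```python
import numpy as np

def aggregate_multiscale(features):
    """Softmax-weighted aggregation of multi-scale feature maps.

    features: list of S arrays, each of shape (C, H, W), one per input
    scale, assumed already resized to a common resolution.
    Returns a single (C, H, W) map: at every pixel, a convex
    combination over the S scales.
    """
    stacked = np.stack(features)                   # (S, C, H, W)
    # Stand-in attention logits: channel mean per scale per pixel.
    # In SS-NAN these would come from a trained conv branch.
    logits = stacked.mean(axis=1)                  # (S, H, W)
    logits = logits - logits.max(axis=0, keepdims=True)  # numerical stability
    attn = np.exp(logits)
    attn /= attn.sum(axis=0, keepdims=True)        # softmax over the S scales
    # Broadcast the per-pixel scale weights across channels and sum.
    return (attn[:, None, :, :] * stacked).sum(axis=0)   # (C, H, W)
```

Because the weights form a softmax over scales at each pixel, the output at every location stays within the range spanned by the per-scale responses there, so no single scale can be over-amplified.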
Cite
Text
Zhao et al. "Self-Supervised Neural Aggregation Networks for Human Parsing." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2017. doi:10.1109/CVPRW.2017.204
Markdown
[Zhao et al. "Self-Supervised Neural Aggregation Networks for Human Parsing." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2017.](https://mlanthology.org/cvprw/2017/zhao2017cvprw-selfsupervised/) doi:10.1109/CVPRW.2017.204
BibTeX
@inproceedings{zhao2017cvprw-selfsupervised,
title = {{Self-Supervised Neural Aggregation Networks for Human Parsing}},
author = {Zhao, Jian and Li, Jianshu and Nie, Xuecheng and Zhao, Fang and Chen, Yunpeng and Wang, Zhecan and Feng, Jiashi and Yan, Shuicheng},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2017},
  pages = {1595--1603},
doi = {10.1109/CVPRW.2017.204},
url = {https://mlanthology.org/cvprw/2017/zhao2017cvprw-selfsupervised/}
}