Self-Paced Multi-Label Learning with Diversity
Abstract
The major challenge of learning from multi-label data arises from the overwhelming size of the label space, which makes the problem NP-hard. This difficulty can be alleviated by gradually introducing labels into the learning process, from easy to hard. In addition, a diversity-maintenance mechanism prevents overfitting to a subset of easy labels. In this paper, we propose self-paced multi-label learning with diversity (SPMLD), which aims to cover diverse labels as the learning pace progresses. The proposed framework is applied to an efficient correlation-based multi-label method, and the resulting non-convex objective function is optimized with an extension of the block coordinate descent algorithm. Empirical evaluations on real-world datasets with different feature and label dimensions demonstrate the effectiveness of the proposed predictive model.
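To make the easy-to-hard idea concrete, the sketch below shows a standard self-paced-with-diversity weighting rule applied per label group. This is a minimal illustration under assumptions, not the authors' SPMLD objective: the function name `spld_weights`, the closed-form threshold, and the parameters `lam` (pace) and `gamma` (diversity) follow the generic self-paced-learning-with-diversity formulation rather than anything specified in the paper.

```python
import numpy as np

def spld_weights(losses, groups, lam, gamma):
    """Self-paced 0/1 weights with a diversity-style threshold (illustrative sketch).

    losses : 1-D array of per-(instance, label) losses.
    groups : 1-D array of group ids (e.g. label index), same length as losses.
    lam    : pace parameter; larger values admit harder entries.
    gamma  : diversity parameter; discourages selecting many entries
             from the same label group.
    """
    v = np.zeros_like(losses, dtype=float)
    for g in np.unique(groups):
        idx = np.where(groups == g)[0]
        order = idx[np.argsort(losses[idx])]  # easiest entries of this group first
        for rank, i in enumerate(order, start=1):
            # The admission threshold shrinks as more entries of the same group
            # are already selected, pushing the selection toward other groups.
            threshold = lam + gamma / (np.sqrt(rank) + np.sqrt(rank - 1))
            v[i] = 1.0 if losses[i] < threshold else 0.0
    return v

# toy usage: six entries spread over two label groups
losses = np.array([0.1, 0.4, 0.9, 0.2, 0.3, 1.2])
groups = np.array([0, 0, 0, 1, 1, 1])
print(spld_weights(losses, groups, lam=0.35, gamma=0.3))
```

In a full pipeline, these weights would be held fixed while the model parameters are updated on the weighted loss, the two steps would be alternated in the spirit of the block coordinate descent mentioned above, and `lam` would be increased each round so that progressively harder labels enter the training set.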
Cite

Text

Seyedi et al. "Self-Paced Multi-Label Learning with Diversity." Proceedings of The Eleventh Asian Conference on Machine Learning, 2019.

Markdown

[Seyedi et al. "Self-Paced Multi-Label Learning with Diversity." Proceedings of The Eleventh Asian Conference on Machine Learning, 2019.](https://mlanthology.org/acml/2019/seyedi2019acml-selfpaced/)

BibTeX
@inproceedings{seyedi2019acml-selfpaced,
title = {{Self-Paced Multi-Label Learning with Diversity}},
author = {Seyedi, Seyed Amjad and Ghodsi, S. Siamak and Akhlaghian, Fardin and Jalili, Mahdi and Moradi, Parham},
booktitle = {Proceedings of The Eleventh Asian Conference on Machine Learning},
year = {2019},
pages = {790--805},
volume = {101},
url = {https://mlanthology.org/acml/2019/seyedi2019acml-selfpaced/}
}