Subjective Learning for Conflicting Data
Abstract
Conventional supervised learning typically assumes that the learning task can be solved by approximating a single target function. However, this assumption is often invalid in open-ended environments where no manual task-level data partitioning is available. In this paper, we investigate a more general setting where training data is sampled from multiple domains while the data in each domain conforms to a domain-specific target function. When different domains possess distinct target functions, training data exhibits inherent "conflict", thus rendering single-model training problematic. To address this issue, we propose a framework termed subjective learning, whose key component is a subjective function that automatically allocates the data among multiple candidate models to resolve the conflict in multi-domain data, and we draw an intriguing connection between subjective learning and a variant of Expectation-Maximization. We present a theoretical analysis of the learnability and the generalization error of our approach, and empirically show its efficacy and potential applications in a range of regression and classification tasks with synthetic data.
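The allocation idea described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; it is a hypothetical hard-EM-style toy in which a "subjective" assignment step routes each sample to the candidate model with the lowest loss, and each model is then refit on its allocated subset. The conflicting domains (slopes 2 and -2) and the use of two linear models are illustrative assumptions.

```python
# Hypothetical illustration of the data-allocation idea, NOT the paper's
# implementation: two linear models resolve conflicting multi-domain
# regression data via a hard-EM loop.
import numpy as np

rng = np.random.default_rng(0)

# Two domains with conflicting target functions on the same inputs:
# domain A: y = 2x, domain B: y = -2x (assumed for illustration).
x = rng.uniform(-1, 1, size=200)
y = np.concatenate([2 * x[:100], -2 * x[100:]])

# Two candidate linear models y = w * x, initialized with different signs.
w = np.array([1.0, -1.0])

for _ in range(10):
    # "Subjective"/E-step: allocate each sample to the lower-loss model.
    losses = (y[None, :] - w[:, None] * x[None, :]) ** 2  # shape (2, n)
    assign = losses.argmin(axis=0)
    # M-step: refit each model on its allocated subset (closed-form
    # least squares for a one-parameter linear model).
    for k in range(2):
        mask = assign == k
        if mask.any():
            w[k] = (x[mask] @ y[mask]) / (x[mask] @ x[mask])

print(np.sort(w))  # the two models recover slopes near -2 and 2
```

With conflicting targets like these, a single least-squares model would average the two domains and fit neither; the allocation step lets each candidate specialize.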
Cite
Text
Zhang et al. "Subjective Learning for Conflicting Data." ICLR 2022 Workshops: ALOE, 2022.
Markdown
[Zhang et al. "Subjective Learning for Conflicting Data." ICLR 2022 Workshops: ALOE, 2022.](https://mlanthology.org/iclrw/2022/zhang2022iclrw-subjective/)
BibTeX
@inproceedings{zhang2022iclrw-subjective,
title = {{Subjective Learning for Conflicting Data}},
author = {Zhang, Tianren and Jiang, Yizhou and Su, Xin and Guo, Shangqi and Gao, Chongkai and Chen, Feng},
booktitle = {ICLR 2022 Workshops: ALOE},
year = {2022},
url = {https://mlanthology.org/iclrw/2022/zhang2022iclrw-subjective/}
}