Learning from Networked Examples

Abstract

Many machine learning algorithms are based on the assumption that training examples are drawn independently. However, this assumption no longer holds when learning from a networked sample, because two or more training examples may share common objects, and hence share the features of those objects. We show that the classic approach of ignoring this dependence can harm the accuracy of the resulting statistics, and we then consider alternatives. One alternative is to use only a subset of mutually independent examples, discarding the rest; however, this is clearly suboptimal. We analyze sample error bounds in this networked setting, providing significantly improved results. An important component of our approach is a family of efficient sample weighting schemes, which lead to novel concentration inequalities.
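
To make the weighting idea concrete, the following is a minimal Python sketch, not necessarily the paper's exact scheme: it assumes the weights are chosen as a fractional matching on the hypergraph whose vertices are the shared objects and whose hyperedges are the examples, so each example receives a weight in [0, 1], the weights of all examples touching any single object sum to at most one, and the total weight plays the role of an effective sample size. All names below (fractional_matching_weights, the toy examples and labels) are illustrative.

# A minimal sketch of one plausible "sample weighting scheme" for
# networked examples: a fractional matching computed by linear
# programming. Assumption (not stated in the abstract): examples are
# hyperedges over shared objects, and the weights must not over-count
# any single object.
import numpy as np
from scipy.optimize import linprog

def fractional_matching_weights(examples):
    # examples: list of sets of object ids.
    # Returns one weight per example such that, for every object, the
    # weights of all examples containing it sum to at most 1.
    objects = sorted(set().union(*examples))
    index = {obj: i for i, obj in enumerate(objects)}
    # Incidence matrix: rows are objects, columns are examples.
    A = np.zeros((len(objects), len(examples)))
    for j, example in enumerate(examples):
        for obj in example:
            A[index[obj], j] = 1.0
    # Maximize the total weight (the effective sample size), i.e.
    # minimize its negation, subject to A w <= 1 and 0 <= w <= 1.
    result = linprog(c=-np.ones(len(examples)),
                     A_ub=A, b_ub=np.ones(len(objects)),
                     bounds=[(0.0, 1.0)] * len(examples),
                     method="highs")
    return result.x

# Toy network: the first two examples share object "b"; the third is
# independent of both.
examples = [{"a", "b"}, {"b", "c"}, {"d", "e"}]
labels = np.array([0.2, 0.8, 0.5])
w = fractional_matching_weights(examples)
print("weights:", np.round(w, 3))
print("effective sample size:", round(w.sum(), 3))  # 2.0 on this toy network
print("weighted mean label:", round(np.dot(w, labels) / w.sum(), 3))

On this toy network the LP may return a vertex solution such as (1, 0, 1), which coincides with simply selecting an independent subset of examples; on denser overlap patterns a fractional matching retains strictly more total weight than any independent subset, which is why a weighted estimator can improve on discarding all dependent examples.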

Cite

Text

Wang et al. "Learning from Networked Examples." Proceedings of the 28th International Conference on Algorithmic Learning Theory, 2017.

Markdown

[Wang et al. "Learning from Networked Examples." Proceedings of the 28th International Conference on Algorithmic Learning Theory, 2017.](https://mlanthology.org/alt/2017/wang2017alt-learning/)

BibTeX

@inproceedings{wang2017alt-learning,
  title     = {{Learning from Networked Examples}},
  author    = {Wang, Yuyi and Guo, Zheng-Chu and Ramon, Jan},
  booktitle = {Proceedings of the 28th International Conference on Algorithmic Learning Theory},
  year      = {2017},
  pages     = {641--666},
  volume    = {76},
  url       = {https://mlanthology.org/alt/2017/wang2017alt-learning/}
}