Consistency in Models for Communication Constrained Distributed Learning
Abstract
Motivated by sensor networks and other distributed settings, several models for distributed learning are presented. The models differ from classical works in statistical pattern recognition by allocating observations of an i.i.d. sampling process amongst members of a network of learning agents. The agents are limited in their ability to communicate to a fusion center; the amount of information available for classification or regression is constrained. For several simple communication models, questions of universal consistency are addressed; i.e., the asymptotics of several agent decision rules and fusion rules are considered in both binary classification and regression frameworks. These models resemble distributed environments and introduce new questions regarding universal consistency. Insofar as these models offer a useful picture of distributed scenarios, this paper considers whether the guarantees provided by Stone’s Theorem in centralized environments hold in distributed settings.
Cite

Text

Predd et al. "Consistency in Models for Communication Constrained Distributed Learning." Annual Conference on Computational Learning Theory, 2004. doi:10.1007/978-3-540-27819-1_31

Markdown

[Predd et al. "Consistency in Models for Communication Constrained Distributed Learning." Annual Conference on Computational Learning Theory, 2004.](https://mlanthology.org/colt/2004/predd2004colt-consistency/) doi:10.1007/978-3-540-27819-1_31

BibTeX
@inproceedings{predd2004colt-consistency,
title = {{Consistency in Models for Communication Constrained Distributed Learning}},
author = {Predd, Joel B. and Kulkarni, Sanjeev R. and Poor, Harold Vincent},
booktitle = {Annual Conference on Computational Learning Theory},
year = {2004},
  pages = {442--456},
doi = {10.1007/978-3-540-27819-1_31},
url = {https://mlanthology.org/colt/2004/predd2004colt-consistency/}
}