Linear Concepts and Hidden Variables: An Empirical Study

Abstract

Some learning techniques for classification tasks work indirectly, by first trying to fit a full probabilistic model to the observed data. Whether this is a good idea or not depends on its robustness to deviations from the postulated model. We study this question experimentally in a restricted, yet non-trivial and interesting case: we consider a conditionally independent attribute (CIA) model which postulates a single binary-valued hidden variable z on which all other attributes (i.e., the target and the observables) depend. In this model, finding the most likely value of any one variable (given known values for the others) reduces to testing a linear function of the observed values.
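The reduction mentioned in the abstract can be sketched concretely. The following is a minimal illustration (not the paper's code) of a CIA model with one binary hidden variable z: because all attributes are conditionally independent given z, the posterior log-odds of z is linear in the observed bits, and the 0.5 cut on P(y=1|x) translates into a linear threshold test. All parameter values below are made up for illustration.

```python
import math

def cia_predict(x, prior, p, q):
    """Predict binary target y given observed bits x under a CIA model:
    one hidden z in {0,1}; prior = P(z=1); p[i] = (P(x_i=1|z=0), P(x_i=1|z=1));
    q = (P(y=1|z=0), P(y=1|z=1)). Attributes are independent given z."""
    logit = math.log(prior / (1 - prior))           # log P(z=1)/P(z=0)
    for xi, (p0, p1) in zip(x, p):
        logit += math.log(p1 / p0) if xi else math.log((1 - p1) / (1 - p0))
    pz1 = 1.0 / (1.0 + math.exp(-logit))            # posterior P(z=1 | x)
    py1 = q[1] * pz1 + q[0] * (1 - pz1)             # marginalize over z
    return int(py1 > 0.5)

def cia_as_linear(prior, p, q):
    """Return (w, b) such that cia_predict(x, ...) == int(w.x + b > 0).
    Assumes q[1] > q[0]; the 0.5 cut on P(y=1|x) becomes a cut on the
    z-posterior logit, which is linear in the observed bits x."""
    w = [math.log(p1 / p0) - math.log((1 - p1) / (1 - p0)) for p0, p1 in p]
    b = math.log(prior / (1 - prior)) + sum(
        math.log((1 - p1) / (1 - p0)) for p0, p1 in p)
    t = (0.5 - q[0]) / (q[1] - q[0])                # pz1 must exceed t
    b -= math.log(t / (1 - t))
    return w, b
```

Enumerating all observation vectors confirms the two predictors agree, which is the sense in which most-likely-value inference in this model is a linear (threshold) function of the observations.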

Cite

Text

Grove and Roth. "Linear Concepts and Hidden Variables: An Empirical Study." Neural Information Processing Systems, 1997.

Markdown

[Grove and Roth. "Linear Concepts and Hidden Variables: An Empirical Study." Neural Information Processing Systems, 1997.](https://mlanthology.org/neurips/1997/grove1997neurips-linear/)

BibTeX

@inproceedings{grove1997neurips-linear,
  title     = {{Linear Concepts and Hidden Variables: An Empirical Study}},
  author    = {Grove, Adam J. and Roth, Dan},
  booktitle = {Neural Information Processing Systems},
  year      = {1997},
  pages     = {500--506},
  url       = {https://mlanthology.org/neurips/1997/grove1997neurips-linear/}
}