Discovering Multiple Constraints That Are Frequently Approximately Satisfied
Abstract
Some high-dimensional datasets can be modelled by assuming that there are many different linear constraints, each of which is Frequently Approximately Satisfied (FAS) by the data. The probability of a data vector under the model is then proportional to the product of the probabilities of its constraint violations. We describe three methods of learning products of constraints using a heavy-tailed probability distribution for the violations.
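The product-of-violations model described in the abstract can be sketched numerically. Below is a minimal, hypothetical illustration (not the paper's implementation): each row of a matrix `W` defines one linear constraint, its violation on a data vector is scored with a heavy-tailed Student-t-style penalty, and the unnormalized log-probability sums these penalties; the names `W` and `alpha` are illustrative assumptions.

```python
import numpy as np

def log_prob_unnorm(x, W, alpha=1.0):
    """Unnormalized log-probability of data vector x under a product of
    linear constraints, each violation penalized with a heavy-tailed
    log p_j proportional to -alpha * log(1 + v_j^2).

    x: data vector of shape (d,); W: constraint matrix of shape (k, d),
    one constraint per row. (Names and the t-like penalty are a sketch,
    assuming a Student-t-style expert on each violation.)
    """
    v = W @ x  # violation of each of the k linear constraints
    return -alpha * np.sum(np.log1p(v ** 2))

rng = np.random.default_rng(0)
W = rng.standard_normal((5, 3))
x_sat = np.zeros(3)                 # satisfies every constraint exactly
x_bad = rng.standard_normal(3)      # a generic vector violates them
# A vector that satisfies the constraints scores at least as high:
print(log_prob_unnorm(x_sat, W) >= log_prob_unnorm(x_bad, W))
```

Because the penalty is heavy-tailed, large violations are only weakly punished, so the model tolerates constraints that are frequently but not always satisfied.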
Cite

Text

Hinton and Teh. "Discovering Multiple Constraints That Are Frequently Approximately Satisfied." Conference on Uncertainty in Artificial Intelligence, 2001.

Markdown

[Hinton and Teh. "Discovering Multiple Constraints That Are Frequently Approximately Satisfied." Conference on Uncertainty in Artificial Intelligence, 2001.](https://mlanthology.org/uai/2001/hinton2001uai-discovering/)

BibTeX
@inproceedings{hinton2001uai-discovering,
title = {{Discovering Multiple Constraints That Are Frequently Approximately Satisfied}},
author = {Hinton, Geoffrey E. and Teh, Yee Whye},
booktitle = {Conference on Uncertainty in Artificial Intelligence},
year = {2001},
  pages = {227--234},
url = {https://mlanthology.org/uai/2001/hinton2001uai-discovering/}
}