Learning Large Scale Common Sense Models of Everyday Life
Abstract
Recent work has shown promise in using large, publicly available, hand-contributed commonsense databases as joint models that can be used to infer human state from day-to-day sensor data. The parameters of these models are mined from the web. We show in this paper that learning these parameters using sensor data (with the mined parameters as priors) can improve performance of the models significantly. The primary challenge in learning is scale. Since the model comprises roughly 50,000 irregularly connected nodes in each time slice, it is intractable either to completely label observed data manually or to compute the expected likelihood of even a single time slice. We show how to solve the resulting semi-supervised learning problem by combining a variety of conventional approximation techniques and a novel technique for simplifying the model called context-based pruning. We show empirically that the learned model is substantially better at interpreting sensor data, and we present a detailed analysis of how various techniques contribute to its performance.
Cite
Text
Pentney et al. "Learning Large Scale Common Sense Models of Everyday Life." AAAI Conference on Artificial Intelligence, 2007.

Markdown
[Pentney et al. "Learning Large Scale Common Sense Models of Everyday Life." AAAI Conference on Artificial Intelligence, 2007.](https://mlanthology.org/aaai/2007/pentney2007aaai-learning/)

BibTeX
@inproceedings{pentney2007aaai-learning,
title = {{Learning Large Scale Common Sense Models of Everyday Life}},
author = {Pentney, William and Philipose, Matthai and Bilmes, Jeff A. and Kautz, Henry A.},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2007},
  pages = {465--470},
url = {https://mlanthology.org/aaai/2007/pentney2007aaai-learning/}
}