Improved Estimation in Time Varying Models
Abstract
Locally adapted parameterizations of a model (such as locally weighted regression) are expressive but often suffer from high variance. We describe an approach for reducing this variance, based on the idea of estimating simultaneously a transformed space for the model and locally adapted parameterizations expressed in the new space. We present a new problem formulation that captures this idea and illustrate it in the important context of time varying models. We develop an algorithm for learning a set of bases for approximating a time varying sparse network; each learned basis constitutes an archetypal sparse network structure. We also provide an extension for learning task-specific bases. We present empirical results on synthetic data sets, as well as on a BCI EEG classification task.
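The abstract's starting point is locally weighted regression, where a separate linear model is fit at each query point with training points weighted by their proximity. As context for the variance problem the paper addresses (this is the standard base technique, not the paper's algorithm), a minimal sketch:

```python
import numpy as np

def locally_weighted_regression(x_query, X, y, tau=0.5):
    """Predict at x_query by fitting a weighted linear model,
    with training points weighted by a Gaussian kernel centered
    at x_query. Bandwidth tau controls the locality of the fit."""
    # Design matrix with an intercept column.
    A = np.column_stack([np.ones_like(X), X])
    # Gaussian kernel weights: nearby points dominate the fit.
    w = np.exp(-((X - x_query) ** 2) / (2 * tau ** 2))
    W = np.diag(w)
    # Weighted least squares: theta = (A^T W A)^{-1} A^T W y.
    theta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
    return theta[0] + theta[1] * x_query

# A local linear fit tracks a smooth nonlinear target closely.
X = np.linspace(-1.0, 1.0, 100)
y = X ** 2
pred = locally_weighted_regression(0.5, X, y, tau=0.1)
```

A small bandwidth `tau` makes the fit highly local and expressive, but each local model is then estimated from few effective data points, which is exactly the high-variance regime the paper's shared-basis approach is meant to mitigate.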
Cite
Text
Precup and Bachman. "Improved Estimation in Time Varying Models." International Conference on Machine Learning, 2012.
Markdown
[Precup and Bachman. "Improved Estimation in Time Varying Models." International Conference on Machine Learning, 2012.](https://mlanthology.org/icml/2012/precup2012icml-improved/)
BibTeX
@inproceedings{precup2012icml-improved,
  title = {{Improved Estimation in Time Varying Models}},
  author = {Precup, Doina and Bachman, Philip},
  booktitle = {International Conference on Machine Learning},
  year = {2012},
  url = {https://mlanthology.org/icml/2012/precup2012icml-improved/}
}