Optimized Expected Information Gain for Nonlinear Dynamical Systems

Abstract

This paper addresses the problem of active model selection for nonlinear dynamical systems. We propose a novel learning approach that selects the most informative subset of time-dependent variables for the purpose of Bayesian model inference. The model selection criterion maximizes the expected Kullback-Leibler divergence between the prior and the posterior distributions over the models. The proposed strategy generalizes the standard D-optimal design, which is recovered under a uniform prior with Gaussian noise. In addition, our approach yields an information-based halting criterion for model identification. We illustrate the benefits of our approach by differentiating between 18 published biochemical models of the TOR signaling pathway, a model selection problem in systems biology. By generating pivotal selection experiments, our strategy outperforms the standard A-optimal, D-optimal and E-optimal sequential design techniques.
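For a finite set of candidate models, the criterion described in the abstract reduces to the mutual information between the model indicator and the predicted outcome of a measurement: the expected (over outcomes) KL divergence between the posterior and the prior over models. The following is a minimal sketch of that computation for discretized outcomes; it is illustrative only and not the paper's implementation, and all names (`expected_information_gain`, the toy candidate measurements) are made up for this example.

```python
import numpy as np

def expected_information_gain(prior, likelihoods):
    """Expected KL divergence between posterior and prior over models.

    prior       -- shape (M,), prior probabilities over the M candidate models.
    likelihoods -- shape (M, Y), p(y | m) for each model m and discretized
                   measurement outcome y.
    Returns sum_y p(y) * KL(p(m|y) || p(m)) in nats, i.e. I(M; Y).
    """
    prior = np.asarray(prior, dtype=float)
    joint = prior[:, None] * likelihoods       # p(m, y)
    # Each term p(m, y) * log(p(m|y) / p(m)); zero-probability cells contribute 0.
    with np.errstate(divide="ignore", invalid="ignore"):
        posterior = joint / joint.sum(axis=0)  # p(m | y), one column per outcome
        terms = np.where(joint > 0.0,
                         joint * np.log(posterior / prior[:, None]),
                         0.0)
    return terms.sum()

# Toy selection step: two equally likely models, two hypothetical candidate
# measurements; pick the one with maximal expected information gain.
prior = np.array([0.5, 0.5])
candidates = {
    "uninformative": np.array([[0.5, 0.5],    # both models predict the
                               [0.5, 0.5]]),  # same outcome distribution
    "discriminating": np.array([[0.9, 0.1],   # models disagree sharply
                                [0.1, 0.9]]),
}
best = max(candidates, key=lambda k: expected_information_gain(prior, candidates[k]))
# best == "discriminating"
```

A greedy sequential design would repeat this step, replacing the prior with the posterior after each observed outcome and stopping once the best remaining gain falls below a threshold, in the spirit of the halting criterion mentioned above.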

Cite

Text

Busetto et al. "Optimized Expected Information Gain for Nonlinear Dynamical Systems." International Conference on Machine Learning, 2009. doi:10.1145/1553374.1553387

Markdown

[Busetto et al. "Optimized Expected Information Gain for Nonlinear Dynamical Systems." International Conference on Machine Learning, 2009.](https://mlanthology.org/icml/2009/busetto2009icml-optimized/) doi:10.1145/1553374.1553387

BibTeX

@inproceedings{busetto2009icml-optimized,
  title     = {{Optimized Expected Information Gain for Nonlinear Dynamical Systems}},
  author    = {Busetto, Alberto Giovanni and Ong, Cheng Soon and Buhmann, Joachim M.},
  booktitle = {International Conference on Machine Learning},
  year      = {2009},
  pages     = {97--104},
  doi       = {10.1145/1553374.1553387},
  url       = {https://mlanthology.org/icml/2009/busetto2009icml-optimized/}
}