Learning Nonlinear Dynamical Systems Using an EM Algorithm
Abstract
The Expectation-Maximization (EM) algorithm is an iterative procedure for maximum likelihood parameter estimation from data sets with missing or hidden variables [2]. It has been applied to system identification in linear stochastic state-space models, where the state variables are hidden from the observer and both the state and the parameters of the model have to be estimated simultaneously [9]. We present a generalization of the EM algorithm for parameter estimation in nonlinear dynamical systems. The "expectation" step makes use of Extended Kalman Smoothing to estimate the state, while the "maximization" step re-estimates the parameters using these uncertain state estimates. In general, the nonlinear maximization step is difficult because it requires integrating out the uncertainty in the states. However, if Gaussian radial basis function (RBF) approximators are used to model the nonlinearities, the integrals become tractable and the maximization step can be solved via systems of linear equations.
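The EM loop the abstract describes can be sketched numerically. Everything below is an illustrative assumption, not the authors' implementation: the toy 1-D system, the RBF centres and width, and a simplified M-step that fits the RBF weights by least squares to the smoothed state means, rather than integrating out the Gaussian state uncertainty analytically as the paper does (that fuller computation also reduces to a linear system for Gaussian RBFs).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D nonlinear state-space model (illustrative only):
#   x_{t+1} = f(x_t) + w_t,  w_t ~ N(0, Q);   y_t = x_t + v_t,  v_t ~ N(0, R)
def f_true(x):
    return 2.0 * np.sin(x)

Q, R, T = 0.1, 0.5, 200
x = np.zeros(T)
for t in range(1, T):
    x[t] = f_true(x[t - 1]) + rng.normal(0.0, np.sqrt(Q))
y = x + rng.normal(0.0, np.sqrt(R), T)

# RBF model of the dynamics: f(x) ~= sum_k c_k exp(-(x - mu_k)^2 / (2 s^2))
mu, s = np.linspace(-3.0, 3.0, 9), 1.0

def phi(xv):
    xv = np.atleast_1d(xv)
    return np.exp(-(xv[:, None] - mu[None, :]) ** 2 / (2.0 * s ** 2))

c = rng.normal(0.0, 0.1, mu.size)  # RBF weights: the parameters to learn

for _ in range(15):
    # E-step: extended Kalman filter, then an RTS smoother, under the
    # current RBF model of the dynamics.
    m, P = np.zeros(T), np.zeros(T)    # filtered means / variances
    mp, Pp = np.zeros(T), np.ones(T)   # one-step predictive means / variances
    m[0], P[0] = y[0], 1.0
    for t in range(1, T):
        # Linearize the RBF dynamics at the previous filtered mean.
        A = ((phi(m[t - 1]) * (-(m[t - 1] - mu) / s ** 2)) @ c)[0]
        mp[t] = (phi(m[t - 1]) @ c)[0]
        Pp[t] = A * P[t - 1] * A + Q
        K = Pp[t] / (Pp[t] + R)        # Kalman gain (observation y_t = x_t + v_t)
        m[t] = mp[t] + K * (y[t] - mp[t])
        P[t] = (1.0 - K) * Pp[t]
    ms, Ps = m.copy(), P.copy()        # smoothed means / variances
    for t in range(T - 2, -1, -1):
        A = ((phi(m[t]) * (-(m[t] - mu) / s ** 2)) @ c)[0]
        G = P[t] * A / Pp[t + 1]       # smoother gain
        ms[t] = m[t] + G * (ms[t + 1] - mp[t + 1])
        Ps[t] = P[t] + G * (Ps[t + 1] - Pp[t + 1]) * G

    # M-step (simplified): linear least squares for the RBF weights, treating
    # the smoothed means as point estimates of the hidden states.
    c, *_ = np.linalg.lstsq(phi(ms[:-1]), ms[1:], rcond=None)
```

Because the RBF weights enter the model linearly, the re-estimation step is a linear solve even in the paper's exact form; the sketch's shortcut of plugging in smoothed means simply drops the correction terms that come from the state covariances `Ps`.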
Cite

Ghahramani and Roweis. "Learning Nonlinear Dynamical Systems Using an EM Algorithm." Neural Information Processing Systems, 1998.

BibTeX
@inproceedings{ghahramani1998neurips-learning,
title = {{Learning Nonlinear Dynamical Systems Using an EM Algorithm}},
author = {Ghahramani, Zoubin and Roweis, Sam T.},
booktitle = {Neural Information Processing Systems},
year = {1998},
pages = {431-437},
url = {https://mlanthology.org/neurips/1998/ghahramani1998neurips-learning/}
}