Interpretable Sequence Learning for Covid-19 Forecasting

Abstract

We propose a novel approach that integrates machine learning into compartmental disease modeling (e.g., SEIR) to predict the progression of COVID-19. Our model is explainable by design: it explicitly shows how the different compartments evolve, and it uses interpretable encoders to incorporate covariates and improve performance. Explainability is valuable to ensure that the model's forecasts are credible to epidemiologists and to instill confidence in end-users such as policy makers and healthcare institutions. Our model can be applied at different geographic resolutions, and we demonstrate it for states and counties in the United States. We show that our model provides more accurate forecasts than the alternatives, and that it yields qualitatively meaningful explanatory insights.
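For readers unfamiliar with the compartmental backbone the abstract refers to, the following is a minimal sketch of a standard SEIR simulation. It is illustrative only: the paper's model additionally learns time-varying rates from covariates via interpretable encoders, and all parameter values below (`beta`, `sigma`, `gamma`, the initial state) are hypothetical, chosen purely for demonstration.

```python
def simulate_seir(S0, E0, I0, R0, beta, sigma, gamma, days, dt=0.1):
    """Forward-Euler integration of the standard SEIR equations:
        dS/dt = -beta * S * I / N        (susceptible become exposed)
        dE/dt =  beta * S * I / N - sigma * E   (exposed become infectious)
        dI/dt =  sigma * E - gamma * I   (infectious recover)
        dR/dt =  gamma * I
    Returns the list of (S, E, I, R) states, one per time step.
    """
    N = S0 + E0 + I0 + R0  # total population, conserved by construction
    S, E, I, R = float(S0), float(E0), float(I0), float(R0)
    trajectory = [(S, E, I, R)]
    for _ in range(int(days / dt)):
        new_exposed   = beta * S * I / N * dt
        new_infected  = sigma * E * dt
        new_recovered = gamma * I * dt
        S -= new_exposed
        E += new_exposed - new_infected
        I += new_infected - new_recovered
        R += new_recovered
        trajectory.append((S, E, I, R))
    return trajectory

# Hypothetical parameters: basic reproduction number beta/gamma = 2.5,
# 5-day mean incubation (sigma = 0.2), 10-day infectious period (gamma = 0.1).
traj = simulate_seir(S0=999_000, E0=500, I0=500, R0=0,
                     beta=0.25, sigma=0.2, gamma=0.1, days=120)
peak_infected = max(I for _, _, I, _ in traj)
```

In the paper's approach, fixed scalar rates such as `beta` are replaced by learned, covariate-dependent functions, which is what makes the forecasts both data-driven and explainable at the compartment level.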

Cite

Text

Arik et al. "Interpretable Sequence Learning for Covid-19 Forecasting." Neural Information Processing Systems, 2020.

Markdown

[Arik et al. "Interpretable Sequence Learning for Covid-19 Forecasting." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/arik2020neurips-interpretable/)

BibTeX

@inproceedings{arik2020neurips-interpretable,
  title     = {{Interpretable Sequence Learning for Covid-19 Forecasting}},
  author    = {Arik, Sercan and Li, Chun-Liang and Yoon, Jinsung and Sinha, Rajarishi and Epshteyn, Arkady and Le, Long and Menon, Vikas and Singh, Shashank and Zhang, Leyou and Nikoltchev, Martin and Sonthalia, Yash and Nakhost, Hootan and Kanal, Elli and Pfister, Tomas},
  booktitle = {Neural Information Processing Systems},
  year      = {2020},
  url       = {https://mlanthology.org/neurips/2020/arik2020neurips-interpretable/}
}