Sequential Data-Consistent Model Inversion

Abstract

Data-consistent model inversion problems aim to infer distributions of model parameters from distributions of experimental observations. Previous approaches to solving these problems include rejection algorithms, which are impractical for many real-world problems, and generative adversarial networks, which require a differentiable simulation. Here, we introduce a sequential sample refinement algorithm that overcomes these drawbacks. A set of parameters is iteratively refined using density ratio estimates in the model input and output domains, and parameters are resampled by training a generative implicit density estimator. We implement this novel approach using a combination of standard models from artificial intelligence and machine learning, including density estimators, binary classifiers, and diffusion models. To demonstrate the method, we show two examples from computational biology, with different levels of complexity.
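A core primitive the abstract mentions, estimating a density ratio with a binary classifier, can be sketched in isolation. The following is a minimal 1-D illustration, not the paper's implementation: the toy Gaussians, the hand-rolled logistic regression, and all variable names are our own assumptions. It uses the standard identity that, for balanced classes, a classifier's logit between samples of p and samples of q estimates log p(x)/q(x).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (assumption, for illustration only):
#   q = N(0, 1) plays the role of the current parameter distribution,
#   p = N(0.5, 1) plays the role of the target distribution.
x_q = rng.normal(0.0, 1.0, size=(2000, 1))
x_p = rng.normal(0.5, 1.0, size=(2000, 1))

# Label samples of p as 1 and samples of q as 0, then fit a
# logistic-regression classifier by plain gradient descent.
X = np.vstack([x_p, x_q])
y = np.concatenate([np.ones(len(x_p)), np.zeros(len(x_q))])
Xb = np.hstack([X, np.ones((len(X), 1))])  # append bias column

w = np.zeros(2)
for _ in range(5000):
    probs = 1.0 / (1.0 + np.exp(-(Xb @ w)))
    grad = Xb.T @ (probs - y) / len(y)
    w -= 0.5 * grad

def density_ratio(x):
    """Estimate p(x)/q(x) as exp(logit); valid because classes are balanced."""
    logit = np.hstack([x, np.ones((len(x), 1))]) @ w
    return np.exp(logit)

# For these two Gaussians the true log-ratio is 0.5*x - 0.125, so the
# estimate should increase with x and track exp(0.5*x - 0.125).
test_x = np.array([[0.0], [1.0]])
est = density_ratio(test_x)
true = np.exp(0.5 * test_x[:, 0] - 0.125)
```

In a sequential scheme of the kind the abstract describes, such ratio estimates would weight or reject parameter samples before a generative density estimator is retrained on the refined set; this sketch covers only the ratio-estimation step.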

Cite

Text

Rumbell et al. "Sequential Data-Consistent Model Inversion." NeurIPS 2023 Workshops: Deep_Inverse, 2023.

Markdown

[Rumbell et al. "Sequential Data-Consistent Model Inversion." NeurIPS 2023 Workshops: Deep_Inverse, 2023.](https://mlanthology.org/neuripsw/2023/rumbell2023neuripsw-sequential/)

BibTeX

@inproceedings{rumbell2023neuripsw-sequential,
  title     = {{Sequential Data-Consistent Model Inversion}},
  author    = {Rumbell, Timothy and Wanjiru, Catherine and Mulang', Isaiah Onando and Obonyo, Stephen and Kozloski, James and Gurev, Viatcheslav},
  booktitle = {NeurIPS 2023 Workshops: Deep_Inverse},
  year      = {2023},
  url       = {https://mlanthology.org/neuripsw/2023/rumbell2023neuripsw-sequential/}
}