Bridge the Inference Gaps of Neural Processes via Expectation Maximization

Abstract

The neural process (NP) is a family of computationally efficient models for learning distributions over functions. However, it suffers from under-fitting and often shows suboptimal performance in practice. Researchers have primarily focused on incorporating diverse structural inductive biases, e.g., attention or convolution, into the model architecture. The issue of inference suboptimality, and an analysis of the NP from the perspective of its optimization objective, has hardly been studied in earlier work. To address this, we propose a surrogate objective for the target log-likelihood of the meta-dataset within the expectation maximization framework. The resulting model, referred to as the Self-normalized Importance weighted Neural Process (SI-NP), can learn a more accurate functional prior and comes with an improvement guarantee with respect to the target log-likelihood. Experimental results show the competitive performance of SI-NP over other NP objectives and illustrate that structural inductive biases, such as attention modules, can further augment our method to achieve state-of-the-art (SOTA) performance.
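
To make the abstract's idea concrete, the following is a minimal, generic sketch of a self-normalized importance-weighted EM surrogate for a latent-variable NP; the proposal $q_\phi$, the weighting scheme, and all notation are assumptions for illustration and are not taken verbatim from the paper. For a task with context $(x_C, y_C)$ and target $(x_T, y_T)$, one draws $K$ latent samples $z_k \sim q_\phi(z \mid x_C, y_C)$ and forms self-normalized weights

$$
\tilde{w}_k = \frac{p_\theta(y_T \mid x_T, z_k)\, p_\theta(z_k \mid x_C)}{q_\phi(z_k \mid x_C, y_C)},
\qquad
w_k = \frac{\tilde{w}_k}{\sum_{j=1}^{K} \tilde{w}_j},
$$

which approximate the intractable posterior in the E-step. The M-step then maximizes the weighted surrogate

$$
\mathcal{L}(\theta) = \sum_{k=1}^{K} w_k \big[ \log p_\theta(y_T \mid x_T, z_k) + \log p_\theta(z_k \mid x_C) \big],
$$

with the weights $w_k$ treated as constants (stop-gradient) during optimization, so that each update is a Monte Carlo approximation of an EM iteration on the target log-likelihood.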

Cite

Text

Wang et al. "Bridge the Inference Gaps of Neural Processes via Expectation Maximization." International Conference on Learning Representations, 2023.

Markdown

[Wang et al. "Bridge the Inference Gaps of Neural Processes via Expectation Maximization." International Conference on Learning Representations, 2023.](https://mlanthology.org/iclr/2023/wang2023iclr-bridge/)

BibTeX

@inproceedings{wang2023iclr-bridge,
  title     = {{Bridge the Inference Gaps of Neural Processes via Expectation Maximization}},
  author    = {Wang, Qi and Federici, Marco and van Hoof, Herke},
  booktitle = {International Conference on Learning Representations},
  year      = {2023},
  url       = {https://mlanthology.org/iclr/2023/wang2023iclr-bridge/}
}