Object Representations as Fixed Points: Training Iterative Refinement Algorithms with Implicit Differentiation

Abstract

Current work in object-centric learning has been motivated by developing learning algorithms that infer independent and symmetric entities from the perceptual input. This often requires the use of iterative refinement procedures that break symmetries among equally plausible explanations for the data, but most prior works differentiate through the unrolled refinement process, which can make optimization exceptionally challenging. In this work, we observe that such iterative refinement methods can be made differentiable by means of the implicit function theorem, and develop an implicit differentiation approach that improves the stability and tractability of training such models by decoupling the forward and backward passes. This connection enables us to apply recent advances in optimizing implicit layers to not only improve the stability and optimization of the slot attention module in SLATE, a state-of-the-art method for learning entity representations, but do so with constant space and time complexity in backpropagation and only one additional line of code.
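The sketch below illustrates, under stated assumptions, the kind of implicit-differentiation trick the abstract alludes to: run the refinement iterations to an approximate fixed point without tracking gradients, then take a single differentiable step from the detached fixed point (a first-order approximation of the implicit gradient). This is not the authors' code; the refinement step `f` and its signature `f(slots, inputs)` are placeholders standing in for one slot attention update.

```python
import torch
import torch.nn as nn


class FixedPointRefinement(nn.Module):
    """Generic iterative refinement trained with a first-order implicit gradient."""

    def __init__(self, f: nn.Module, num_iters: int = 7):
        super().__init__()
        self.f = f                # one refinement step, e.g. a slot attention update (assumed interface)
        self.num_iters = num_iters

    def forward(self, slots: torch.Tensor, inputs: torch.Tensor) -> torch.Tensor:
        # Forward pass: iterate to an approximate fixed point without building a graph,
        # so memory does not grow with the number of refinement iterations.
        with torch.no_grad():
            for _ in range(self.num_iters):
                slots = self.f(slots, inputs)
        # Backward pass: detach the fixed point and apply one final differentiable step.
        # This detach-then-step is the "one additional line of code" style change,
        # giving constant space and time complexity in backpropagation.
        slots = self.f(slots.detach(), inputs)
        return slots
```

In contrast, differentiating through the fully unrolled refinement loop would keep every intermediate iterate in memory and backpropagate through the whole trajectory, which is what the implicit formulation avoids.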

Cite

Text

Chang et al. "Object Representations as Fixed Points: Training Iterative Refinement Algorithms with Implicit Differentiation." Neural Information Processing Systems, 2022.

Markdown

[Chang et al. "Object Representations as Fixed Points: Training Iterative Refinement Algorithms with Implicit Differentiation." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/chang2022neurips-object/)

BibTeX

@inproceedings{chang2022neurips-object,
  title     = {{Object Representations as Fixed Points: Training Iterative Refinement Algorithms with Implicit Differentiation}},
  author    = {Chang, Michael and Griffiths, Tom and Levine, Sergey},
  booktitle = {Neural Information Processing Systems},
  year      = {2022},
  url       = {https://mlanthology.org/neurips/2022/chang2022neurips-object/}
}