Early Stopping for Deep Image Prior

Abstract

Deep image prior (DIP) and its variants have shown remarkable potential for solving inverse problems in computational imaging (CI), requiring no separate training data. Practical DIP models are often substantially overparameterized. During the learning process, these models first learn the desired visual content and then pick up potential modeling and observational noise, i.e., they exhibit early learning followed by overfitting. Thus, the practicality of DIP hinges on early stopping (ES) that can capture the transition period. In this regard, most previous DIP works for CI tasks only demonstrate the potential of the models: they report peak performance against the ground truth but provide no clue about how to operationally obtain near-peak performance without access to the ground truth. In this paper, we set out to break this practicality barrier of DIP and propose an effective ES strategy that consistently detects near-peak performance across several CI tasks and DIP variants. Based simply on the running variance of DIP intermediate reconstructions, our ES method not only outpaces existing ones, which only work in very narrow regimes, but also remains effective when combined with methods that try to mitigate overfitting.
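To make the abstract's idea concrete, below is a minimal Python/NumPy sketch of a variance-based ES criterion in the spirit described above: track the variance of intermediate reconstructions over a sliding window and stop once that variance stops decreasing. The window size, patience value, and the `dip_step` callable are illustrative assumptions, not the paper's exact algorithm or hyperparameters.

```python
import numpy as np
from collections import deque

def windowed_variance(recons):
    """Mean per-pixel variance across a window of reconstructions."""
    stack = np.stack(list(recons), axis=0)   # (window, ...image shape)
    return float(stack.var(axis=0).mean())   # scalar summary of variability

def run_dip_with_es(dip_step, num_iters=10_000, window=100, patience=500):
    """Run DIP updates with a windowed-variance early-stopping rule.

    `dip_step(t)` is an assumed user-supplied callable that performs one
    optimization step and returns the current reconstruction as an array.
    `window` and `patience` are hypothetical hyperparameters.
    """
    buf = deque(maxlen=window)          # sliding window of recent outputs
    best_var, best_recon, stall = np.inf, None, 0
    for t in range(num_iters):
        recon = dip_step(t)
        buf.append(recon)
        if len(buf) < window:
            continue                    # wait until the window is full
        v = windowed_variance(buf)
        if v < best_var:                # variance still decreasing: keep going
            best_var, best_recon, stall = v, recon.copy(), 0
        else:
            stall += 1
            if stall >= patience:       # variance curve has bottomed out: stop
                break
    return best_recon
```

The intuition is that reconstructions change little while the model is still fitting clean image content, but fluctuate once it starts fitting noise, so the windowed variance bottoms out near the early-learning-to-overfitting transition.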

Cite

Text

Wang et al. "Early Stopping for Deep Image Prior." Transactions on Machine Learning Research, 2023.

Markdown

[Wang et al. "Early Stopping for Deep Image Prior." Transactions on Machine Learning Research, 2023.](https://mlanthology.org/tmlr/2023/wang2023tmlr-early/)

BibTeX

@article{wang2023tmlr-early,
  title     = {{Early Stopping for Deep Image Prior}},
  author    = {Wang, Hengkang and Li, Taihui and Zhuang, Zhong and Chen, Tiancong and Liang, Hengyue and Sun, Ju},
  journal   = {Transactions on Machine Learning Research},
  year      = {2023},
  url       = {https://mlanthology.org/tmlr/2023/wang2023tmlr-early/}
}