DARTS for Inverse Problems: A Study on Stability
Abstract
Differentiable architecture search (DARTS) is a widely researched tool for neural architecture search, owing to its promising results for image classification. The main benefit of DARTS is the weight-sharing one-shot paradigm, which enables an efficient architecture search. In this work, we investigate DARTS in a systematic case study of inverse problems, which allows us to analyze these potential benefits in a controlled manner. Although we demonstrate that the success of DARTS can be extended from classification to reconstruction, our experiments reveal a fundamental difficulty in the evaluation of DARTS-based methods: the results show a large variance in all test cases, and the weight-sharing performance of the architecture found during training does not always reflect its final performance. We conclude that it is necessary to 1) report the results of any DARTS-based method over several runs along with the underlying performance statistics and 2) show the correlation between the training and final architecture performance.
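The abstract's two recommendations can be made concrete with a small sketch: summarize a metric over several independent search runs, and check the rank correlation between weight-sharing (search-time) performance and the final, retrained performance. The PSNR values below are hypothetical illustrative numbers, not results from the paper; only the Python standard library is assumed.

```python
# Sketch of the reporting protocol suggested by the paper:
# (1) mean +/- std over several runs, (2) rank correlation between
# search-time (weight-sharing) and final performance.
# All numeric values here are made up for illustration.
import statistics


def rank(values):
    """Assign 1-based ranks, with average ranks for ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks


def spearman(x, y):
    """Spearman rank correlation = Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    mx, my = statistics.mean(rx), statistics.mean(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den


# Hypothetical per-run reconstruction quality (PSNR in dB):
search_psnr = [29.1, 28.4, 30.2, 27.9, 29.6]   # weight-sharing performance
final_psnr = [30.0, 29.8, 30.5, 28.2, 29.1]    # after retraining from scratch

print(f"final PSNR: {statistics.mean(final_psnr):.2f} "
      f"+/- {statistics.stdev(final_psnr):.2f} dB over {len(final_psnr)} runs")
print(f"Spearman rank correlation: {spearman(search_psnr, final_psnr):.2f}")
```

A high rank correlation would indicate that the weight-sharing performance is a reliable proxy for the final architecture quality; the paper's point is precisely that this cannot be assumed and should be reported.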
Cite
Text
Geiping et al. "DARTS for Inverse Problems: A Study on Stability." NeurIPS 2021 Workshops: Deep_Inverse, 2021.
Markdown
[Geiping et al. "DARTS for Inverse Problems: A Study on Stability." NeurIPS 2021 Workshops: Deep_Inverse, 2021.](https://mlanthology.org/neuripsw/2021/geiping2021neuripsw-darts/)
BibTeX
@inproceedings{geiping2021neuripsw-darts,
  title = {{DARTS for Inverse Problems: A Study on Stability}},
  author = {Geiping, Jonas and Lukasik, Jovita and Keuper, Margret and Moeller, Michael},
  booktitle = {NeurIPS 2021 Workshops: Deep_Inverse},
  year = {2021},
  url = {https://mlanthology.org/neuripsw/2021/geiping2021neuripsw-darts/}
}