Quantized Generative Models for Solving Inverse Problems

Abstract

Generative priors have been shown to be highly successful in solving inverse problems. In this paper, we consider quantized generative models, i.e., generator networks whose weights come from a learnt finite alphabet. Quantized neural networks are efficient in terms of memory and computation, making them ideally suited for deployment on low-precision hardware. We use quantized generative models to solve non-linear inverse problems. We introduce a new meta-learning framework that makes use of proximal operators and jointly optimizes the quantized weights of the generative model, the parameters of the sensing network, and the latent-space representation. Experimental validation is carried out on standard datasets: MNIST, CIFAR10, SVHN, and STL10. The results show that 4-bit networks match the performance of 32-bit networks, while 1-bit networks are about 0.7 to 2 dB inferior but reduce the model size significantly (by 32×).
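
The abstract's central ingredient, keeping generator weights on a learnt finite alphabet by way of a proximal operator, can be illustrated in a few lines. The sketch below is not the authors' implementation: the names prox_alphabet and fit_alphabet, the 1-D Lloyd (k-means) refit of the alphabet, and all parameter choices are assumptions made for exposition. It relies only on the fact that, for a finite-set constraint, the proximal operator reduces to a nearest-codeword projection.

import numpy as np

def prox_alphabet(weights, alphabet):
    # Project each weight onto the nearest element of the finite
    # alphabet; for a finite-set constraint, the proximal operator
    # is exactly this nearest-codeword lookup.
    a = np.asarray(alphabet, dtype=float)
    idx = np.abs(weights.reshape(-1, 1) - a.reshape(1, -1)).argmin(axis=1)
    return a[idx].reshape(weights.shape)

def fit_alphabet(weights, n_levels, n_iters=10):
    # Learn the alphabet from the current weights with a 1-D Lloyd
    # (k-means) iteration: assign each weight to its nearest codeword,
    # then recentre each codeword on its assigned weights.
    w = weights.reshape(-1)
    a = np.linspace(w.min(), w.max(), n_levels)
    for _ in range(n_iters):
        idx = np.abs(w.reshape(-1, 1) - a.reshape(1, -1)).argmin(axis=1)
        for k in range(n_levels):
            if np.any(idx == k):
                a[k] = w[idx == k].mean()
    return a

# Toy usage: a 4-level (2-bit) alphabet learnt from Gaussian weights.
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 32))
alphabet = fit_alphabet(w, n_levels=4)
w_q = prox_alphabet(w, alphabet)

In the paper's joint optimization, a step of this kind would alternate with gradient updates of the sensing-network parameters and the latent code; the alternation schedule shown here is likewise only an assumption.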

Cite

Text

Nareddy et al. "Quantized Generative Models for Solving Inverse Problems." IEEE/CVF International Conference on Computer Vision Workshops, 2023. doi:10.1109/ICCVW60793.2023.00167

Markdown

[Nareddy et al. "Quantized Generative Models for Solving Inverse Problems." IEEE/CVF International Conference on Computer Vision Workshops, 2023.](https://mlanthology.org/iccvw/2023/nareddy2023iccvw-quantized/) doi:10.1109/ICCVW60793.2023.00167

BibTeX

@inproceedings{nareddy2023iccvw-quantized,
  title     = {{Quantized Generative Models for Solving Inverse Problems}},
  author    = {Nareddy, Kartheek Kumar Reddy and Killedar, Vinayak and Seelamantula, Chandra Sekhar},
  booktitle = {IEEE/CVF International Conference on Computer Vision Workshops},
  year      = {2023},
  pages     = {1520--1525},
  doi       = {10.1109/ICCVW60793.2023.00167},
  url       = {https://mlanthology.org/iccvw/2023/nareddy2023iccvw-quantized/}
}