Neural Image Compression: Generalization, Robustness, and Spectral Biases

Abstract

Recent advances in neural image compression (NIC) have produced models that are beginning to outperform traditional codecs. While this has led to growing excitement about using NIC in real-world applications, the successful adoption of any machine learning system in the wild requires it to generalize, and be robust, to unseen distribution shifts at deployment. Unfortunately, current research lacks comprehensive datasets and informative tools to evaluate and understand NIC performance in real-world settings. To bridge this crucial gap, we provide a comprehensive benchmark suite to evaluate the out-of-distribution (OOD) performance of image compression methods, and we propose spectrally inspired inspection tools to gain deeper insight into the errors introduced by image compression methods as well as their OOD behavior. We then carry out a detailed performance comparison of a classical codec with NIC variants, revealing intriguing findings that challenge our current understanding of NIC.

Cite

Text

Lieberman et al. "Neural Image Compression: Generalization, Robustness, and Spectral Biases." ICML 2023 Workshops: NCW, 2023.

Markdown

[Lieberman et al. "Neural Image Compression: Generalization, Robustness, and Spectral Biases." ICML 2023 Workshops: NCW, 2023.](https://mlanthology.org/icmlw/2023/lieberman2023icmlw-neural/)

BibTeX

@inproceedings{lieberman2023icmlw-neural,
  title     = {{Neural Image Compression: Generalization, Robustness, and Spectral Biases}},
  author    = {Lieberman, Kelsey and Diffenderfer, James and Godfrey, Charles and Kailkhura, Bhavya},
  booktitle = {ICML 2023 Workshops: NCW},
  year      = {2023},
  url       = {https://mlanthology.org/icmlw/2023/lieberman2023icmlw-neural/}
}