Contrastive Monotonic Pixel-Level Modulation
Abstract
Continuous one-to-many mapping is a less investigated yet important task in both low-level vision and neural image translation. In this paper, we present a new formulation called MonoPix, an unsupervised and contrastive continuous modulation model, and take a step further to enable pixel-level spatial control, which is critical but could not be properly handled by previous methods. The key feature of this work is to model the monotonicity between controlling signals and the domain discriminator with a novel contrastive modulation framework and corresponding monotonicity constraints. We also introduce a selective inference strategy with logarithmic approximation complexity that supports fast domain adaptation. State-of-the-art performance is validated on a variety of continuous mapping tasks, including AFHQ cat-to-dog and Yosemite summer-to-winter translation. The introduced approach also provides a new solution for many low-level tasks such as low-light enhancement and natural noise generation, going beyond the long-established practice of one-to-one training and inference. Code is available at https://github.com/lukun199/MonoPix.
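To make the "logarithmic approximation complexity" claim concrete, the sketch below shows one plausible reading: if the domain discriminator's score is monotonic in the control intensity, a desired translation strength can be located by binary search over the intensity rather than a dense sweep. This is an illustrative sketch only, not the authors' implementation; the `generator` and `discriminator` callables and their signatures are hypothetical placeholders.

```python
# Hypothetical sketch of selective inference via binary search over the
# control intensity. Assumes the discriminator score increases monotonically
# with the intensity, as MonoPix's monotonicity constraint encourages.
def selective_inference(generator, discriminator, x, target_score,
                        lo=0.0, hi=1.0, tol=1e-2, max_steps=16):
    """Search for an intensity whose discriminator score approximates
    `target_score`, using O(log(1/tol)) generator/discriminator calls."""
    mid, score = 0.5 * (lo + hi), None
    for _ in range(max_steps):
        mid = 0.5 * (lo + hi)
        score = discriminator(generator(x, mid))  # score of the translated image
        if abs(score - target_score) < tol:
            break
        if score < target_score:
            lo = mid  # translation too weak: search higher intensities
        else:
            hi = mid  # translation too strong: search lower intensities
    return mid, score
```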
Cite
Text
Lu et al. "Contrastive Monotonic Pixel-Level Modulation." Proceedings of the European Conference on Computer Vision (ECCV), 2022. doi:10.1007/978-3-031-19784-0_29
Markdown
[Lu et al. "Contrastive Monotonic Pixel-Level Modulation." Proceedings of the European Conference on Computer Vision (ECCV), 2022.](https://mlanthology.org/eccv/2022/lu2022eccv-contrastive/) doi:10.1007/978-3-031-19784-0_29
BibTeX
@inproceedings{lu2022eccv-contrastive,
title = {{Contrastive Monotonic Pixel-Level Modulation}},
author = {Lu, Kun and Li, Rongpeng and Zhang, Honggang},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2022},
doi = {10.1007/978-3-031-19784-0_29},
url = {https://mlanthology.org/eccv/2022/lu2022eccv-contrastive/}
}