Synthetic Convolutional Features for Improved Semantic Segmentation
Abstract
Recently, learning-based image synthesis has enabled the generation of high-resolution images, either by applying popular adversarial training or by using a powerful perceptual loss. However, it remains challenging to successfully leverage such synthetic images to improve semantic segmentation. Therefore, we propose to generate intermediate convolutional features instead, and present the first synthesis approach tailored to such intermediate convolutional features. This allows us to generate new features from label masks and incorporate them into the training procedure in order to improve the performance of semantic segmentation. Experimental results and analysis on two challenging datasets, Cityscapes and ADE20K, show that our generated features improve performance on segmentation tasks.
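To make the idea concrete, below is a minimal, hypothetical sketch of training a segmentation head on a mix of real backbone features and features synthesized from label masks. The module names (MaskToFeatureGenerator, SegmentationHead), channel sizes, generator architecture, and equal loss weighting are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch only: architectures and hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 19     # e.g. Cityscapes; assumption
FEAT_CHANNELS = 256  # channel width of the intermediate features; assumption


class MaskToFeatureGenerator(nn.Module):
    """Maps a one-hot label mask to synthetic intermediate convolutional features."""
    def __init__(self, num_classes=NUM_CLASSES, feat_channels=FEAT_CHANNELS):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_classes, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, feat_channels, 1),
        )

    def forward(self, onehot_mask):
        return self.net(onehot_mask)


class SegmentationHead(nn.Module):
    """Classifies each spatial location of a feature map into semantic classes."""
    def __init__(self, feat_channels=FEAT_CHANNELS, num_classes=NUM_CLASSES):
        super().__init__()
        self.classifier = nn.Conv2d(feat_channels, num_classes, 1)

    def forward(self, feats):
        return self.classifier(feats)


def training_step(head, generator, real_feats, real_labels, synth_labels, optimizer):
    """One optimization step mixing real and synthetic features (sketch)."""
    # Real branch: features extracted by a backbone (feature extraction not shown).
    loss_real = F.cross_entropy(head(real_feats), real_labels)

    # Synthetic branch: features generated from label masks, supervised by those masks.
    onehot = F.one_hot(synth_labels, NUM_CLASSES).permute(0, 3, 1, 2).float()
    synth_feats = generator(onehot).detach()  # generator assumed already trained
    loss_synth = F.cross_entropy(head(synth_feats), synth_labels)

    loss = loss_real + loss_synth  # equal weighting is an assumption
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch the synthetic branch simply augments the supervision seen by the segmentation head; how the features are actually synthesized and weighted follows the paper, not this illustration.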
Cite
Text
He et al. "Synthetic Convolutional Features for Improved Semantic Segmentation." European Conference on Computer Vision Workshops, 2020. doi:10.1007/978-3-030-66823-5_19
Markdown
[He et al. "Synthetic Convolutional Features for Improved Semantic Segmentation." European Conference on Computer Vision Workshops, 2020.](https://mlanthology.org/eccvw/2020/he2020eccvw-synthetic/) doi:10.1007/978-3-030-66823-5_19
BibTeX
@inproceedings{he2020eccvw-synthetic,
title = {{Synthetic Convolutional Features for Improved Semantic Segmentation}},
author = {He, Yang and Schiele, Bernt and Fritz, Mario},
booktitle = {European Conference on Computer Vision Workshops},
year = {2020},
pages = {320-336},
doi = {10.1007/978-3-030-66823-5_19},
url = {https://mlanthology.org/eccvw/2020/he2020eccvw-synthetic/}
}