CANet: A Context-Aware Network for Shadow Removal

Abstract

In this paper, we propose a novel two-stage context-aware network named CANet for shadow removal, in which contextual information from non-shadow regions is transferred to shadow regions in the embedded feature space. At Stage-I, we propose a contextual patch matching module to generate a set of potential matching pairs of shadow and non-shadow patches. Combined with the potential contextual relationships between shadow and non-shadow regions, our well-designed contextual feature transfer (CFT) mechanism transfers contextual information from non-shadow to shadow regions at different scales. With the reconstructed feature maps, we remove shadows in the L and A/B channels separately. At Stage-II, we use an encoder-decoder to refine the current results and generate the final shadow removal results. We evaluate our proposed CANet on two benchmark datasets and on real-world shadow images with complex scenes. Extensive experimental results demonstrate the efficacy of our proposed CANet, which outperforms state-of-the-art methods.
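The core idea of the patch matching stage can be illustrated with a minimal sketch: for each shadow patch, find the most similar non-shadow patches by feature similarity, so that their context can later be transferred. This is a simplified stand-in using cosine similarity over generic feature vectors, not the authors' actual contextual patch matching module; the function name and shapes are illustrative assumptions.

```python
import numpy as np

def match_patches(shadow_feats, nonshadow_feats, top_k=3):
    """For each shadow patch feature vector, return the indices of the
    top_k most similar non-shadow patch features by cosine similarity.
    (Simplified illustration, not the paper's implementation.)"""
    # Normalize rows to unit length so dot products equal cosine similarity.
    s = shadow_feats / np.linalg.norm(shadow_feats, axis=1, keepdims=True)
    n = nonshadow_feats / np.linalg.norm(nonshadow_feats, axis=1, keepdims=True)
    sim = s @ n.T  # similarity matrix, shape (num_shadow, num_nonshadow)
    # Sort each row in descending similarity and keep the top_k indices.
    return np.argsort(-sim, axis=1)[:, :top_k]
```

In the paper, such matching pairs would then drive the contextual feature transfer (CFT) mechanism, which copies matched non-shadow context into shadow regions at multiple feature scales.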

Cite

Text

Chen et al. "CANet: A Context-Aware Network for Shadow Removal." International Conference on Computer Vision, 2021. doi:10.1109/ICCV48922.2021.00470

Markdown

[Chen et al. "CANet: A Context-Aware Network for Shadow Removal." International Conference on Computer Vision, 2021.](https://mlanthology.org/iccv/2021/chen2021iccv-canet/) doi:10.1109/ICCV48922.2021.00470

BibTeX

@inproceedings{chen2021iccv-canet,
  title     = {{CANet: A Context-Aware Network for Shadow Removal}},
  author    = {Chen, Zipei and Long, Chengjiang and Zhang, Ling and Xiao, Chunxia},
  booktitle = {International Conference on Computer Vision},
  year      = {2021},
  pages     = {4743--4752},
  doi       = {10.1109/ICCV48922.2021.00470},
  url       = {https://mlanthology.org/iccv/2021/chen2021iccv-canet/}
}