Learning Self-Prior for Mesh Denoising Using Dual Graph Convolutional Networks
Abstract
This study proposes a deep-learning framework for mesh denoising from a single noisy input, in which two graph convolutional networks are trained jointly to filter vertex positions and facet normals separately. A prior obtained from only a single input is referred to as a self-prior. The proposed method leverages the framework of the deep image prior (DIP), which obtains a self-prior for image restoration using a convolutional neural network (CNN). Thus, we obtain a denoised mesh without any ground-truth noise-free meshes. Whereas the original DIP transforms a fixed random code into a noise-free image with a neural network, we reproduce vertex displacements from a fixed random code and reproduce facet normals from feature vectors that summarize local triangle arrangements. After tuning several hyperparameters on a few validation samples, our method achieved significantly higher performance than traditional approaches working with a single noisy input mesh. Moreover, its performance is better than that of other methods using deep neural networks trained on large-scale shape datasets. Because our method depends on neither large-scale datasets nor ground-truth noise-free meshes, it can easily denoise meshes whose shapes are rarely included in shape datasets.
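To illustrate the self-prior idea the abstract builds on, the following is a minimal sketch of DIP-style fitting on a toy 1-D signal with a hand-rolled two-layer MLP. This is an assumption-laden illustration, not the paper's dual graph convolutional networks: a fixed random code is fed to a small network, which is fitted to the noisy observation by gradient descent, so that the network's limited capacity and early stopping act as the only prior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "signal": smooth ground truth plus noise (illustration only,
# standing in for noisy vertex displacements).
t = np.linspace(0.0, 1.0, 64)
clean = np.sin(2 * np.pi * t)
noisy = clean + 0.3 * rng.standard_normal(t.size)

# Fixed random code, as in DIP: the network input never changes.
z = rng.standard_normal((t.size, 8))

# Small two-layer MLP, trained with plain gradient descent to reproduce
# the noisy signal from the fixed code.
W1 = 0.1 * rng.standard_normal((8, 16))
W2 = 0.1 * rng.standard_normal((16, 1))

def forward(W1, W2):
    h = np.tanh(z @ W1)
    return h, (h @ W2).ravel()

init_mse = np.mean((forward(W1, W2)[1] - noisy) ** 2)

lr = 0.05
for step in range(500):  # early stopping: structure is fitted before noise
    h, out = forward(W1, W2)
    err = out - noisy                        # gradient of the MSE w.r.t. out
    gW2 = h.T @ err[:, None] / t.size
    gh = err[:, None] @ W2.T * (1.0 - h**2)  # backprop through tanh
    gW1 = z.T @ gh / t.size
    W1 -= lr * gW1
    W2 -= lr * gW2

_, denoised = forward(W1, W2)
final_mse = np.mean((denoised - noisy) ** 2)
```

The paper replaces this toy MLP with two graph convolutional networks operating on mesh connectivity, one for vertex positions and one for facet normals, but the optimization pattern (fixed code in, noisy observation as the only target) is the same.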
Cite
Text
Hattori et al. "Learning Self-Prior for Mesh Denoising Using Dual Graph Convolutional Networks." Proceedings of the European Conference on Computer Vision (ECCV), 2022. doi:10.1007/978-3-031-20062-5_21
Markdown
[Hattori et al. "Learning Self-Prior for Mesh Denoising Using Dual Graph Convolutional Networks." Proceedings of the European Conference on Computer Vision (ECCV), 2022.](https://mlanthology.org/eccv/2022/hattori2022eccv-learning/) doi:10.1007/978-3-031-20062-5_21
BibTeX
@inproceedings{hattori2022eccv-learning,
title = {{Learning Self-Prior for Mesh Denoising Using Dual Graph Convolutional Networks}},
author = {Hattori, Shota and Yatagawa, Tatsuya and Ohtake, Yutaka and Suzuki, Hiromasa},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2022},
doi = {10.1007/978-3-031-20062-5_21},
url = {https://mlanthology.org/eccv/2022/hattori2022eccv-learning/}
}