AIM 2024 Sparse Neural Rendering Challenge: Methods and Results
Abstract
This paper reviews the challenge on Sparse Neural Rendering that was part of the Advances in Image Manipulation (AIM) workshop, held in conjunction with ECCV 2024. This manuscript focuses on the competition set-up, the proposed methods and their respective results. The challenge aims at producing novel camera view synthesis of diverse scenes from sparse image observations. It is composed of two tracks with differing levels of sparsity: 3 views in Track 1 (very sparse) and 9 views in Track 2 (sparse). Participants are asked to optimise objective fidelity to the ground-truth images as measured via the Peak Signal-to-Noise Ratio (PSNR) metric. For both tracks, we use the newly introduced Sparse Rendering (SpaRe) dataset [22] and the popular DTU MVS dataset [1]. In this challenge, 5 teams submitted final results to Track 1 and 4 teams submitted final results to Track 2. The submitted models are varied and push the boundaries of the current state-of-the-art in sparse neural rendering. A detailed description of all models developed in the challenge is provided in this paper.
Cite
Text
Nazarczuk et al. "AIM 2024 Sparse Neural Rendering Challenge: Methods and Results." European Conference on Computer Vision Workshops, 2024. doi:10.1007/978-3-031-91856-8_2

Markdown
[Nazarczuk et al. "AIM 2024 Sparse Neural Rendering Challenge: Methods and Results." European Conference on Computer Vision Workshops, 2024.](https://mlanthology.org/eccvw/2024/nazarczuk2024eccvw-aim/) doi:10.1007/978-3-031-91856-8_2

BibTeX
@inproceedings{nazarczuk2024eccvw-aim,
title = {{AIM 2024 Sparse Neural Rendering Challenge: Methods and Results}},
author = {Nazarczuk, Michal and Catley-Chandar, Sibi and Tanay, Thomas and Shaw, Richard and Pérez-Pellitero, Eduardo and Timofte, Radu and Yan, Xing and Wang, Pan and Guo, Yali and Wu, Yongxin and Cai, Youcheng and Yang, Yanan and Li, Junting and Zhou, Yanghong and Mok, P. Y. and He, Zongqi and Xiao, Zhe and Chan, Kin-Chung and Goshu, Hana Lebeta and Yang, Cuixin and Dong, Rongkang and Xiao, Jun and Lam, Kin-Man and Hao, Jiayao and Gao, Qiong and Zu, Yanyan and Zhang, Junpei and Jiao, Licheng and Liu, Xu and Purohit, Kuldeep},
booktitle = {European Conference on Computer Vision Workshops},
year = {2024},
pages = {18-35},
doi = {10.1007/978-3-031-91856-8_2},
url = {https://mlanthology.org/eccvw/2024/nazarczuk2024eccvw-aim/}
}