RGB2Point: 3D Point Cloud Generation from Single RGB Images
Abstract
We introduce RGB2Point, an unposed single-view RGB image to 3D point cloud generation method based on a Transformer. RGB2Point takes an input image of an object and generates a dense 3D point cloud. In contrast to prior works based on CNN layers and diffusion-denoising approaches, we use pre-trained Transformer layers that are fast and generate high-quality point clouds with consistent quality across the available categories. Our generated point clouds demonstrate high quality on a real-world dataset, as evidenced by improved Chamfer distance (51.15%) and Earth Mover's distance (36.17%) metrics compared to the current state-of-the-art. Additionally, our approach shows better quality on a synthetic dataset, achieving better Chamfer distance (39.26%), Earth Mover's distance (26.95%), and F-score (47.16%). Moreover, our method produces 63.1% more consistent high-quality results across various object categories than prior works. Furthermore, RGB2Point is computationally efficient, requiring only 2.3 GB of VRAM to reconstruct a 3D point cloud from a single RGB image, and our implementation generates results 15,133x faster than a SOTA diffusion-based model.
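The abstract describes a pipeline built from a pre-trained Transformer image encoder followed by a head that regresses a dense point cloud, evaluated with Chamfer distance and Earth Mover's distance. The sketch below is a minimal illustration of that general idea, not the authors' architecture: the backbone choice (torchvision ViT-B/16), the point count, the head sizes, and the `Image2PointCloud` / `chamfer_distance` names are all assumptions for illustration only.

```python
# Minimal sketch (not the authors' implementation): a pre-trained Transformer
# image encoder followed by a small head that regresses an N x 3 point cloud,
# plus a symmetric Chamfer distance for evaluation. Backbone, point count,
# and head sizes are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights

class Image2PointCloud(nn.Module):
    def __init__(self, num_points: int = 1024):
        super().__init__()
        self.num_points = num_points
        # Pre-trained Transformer backbone; reuse its 768-d image feature.
        self.encoder = vit_b_16(weights=ViT_B_16_Weights.DEFAULT)
        self.encoder.heads = nn.Identity()  # drop the classification head
        # Lightweight regression head mapping image features to 3D points.
        self.head = nn.Sequential(
            nn.Linear(768, 2048), nn.GELU(),
            nn.Linear(2048, num_points * 3),
        )

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        feats = self.encoder(images)               # (B, 768)
        points = self.head(feats)                  # (B, num_points * 3)
        return points.view(-1, self.num_points, 3)

def chamfer_distance(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Symmetric Chamfer distance between point sets a (B, N, 3) and b (B, M, 3)."""
    d = torch.cdist(a, b)                          # pairwise distances (B, N, M)
    return d.min(dim=2).values.mean(dim=1) + d.min(dim=1).values.mean(dim=1)

if __name__ == "__main__":
    model = Image2PointCloud()
    rgb = torch.rand(2, 3, 224, 224)               # a batch of RGB images
    pred = model(rgb)                              # (2, 1024, 3) point clouds
    target = torch.rand(2, 1024, 3)
    print(pred.shape, chamfer_distance(pred, target))
```

Regressing all points from a single global feature vector keeps the head small and inference fast, which is in the spirit of the efficiency claims above, but the actual RGB2Point design may differ.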
Cite
Text
Lee and Benes. "RGB2Point: 3D Point Cloud Generation from Single RGB Images." Winter Conference on Applications of Computer Vision, 2025.
Markdown
[Lee and Benes. "RGB2Point: 3D Point Cloud Generation from Single RGB Images." Winter Conference on Applications of Computer Vision, 2025.](https://mlanthology.org/wacv/2025/lee2025wacv-rgb2point/)
BibTeX
@inproceedings{lee2025wacv-rgb2point,
title = {{RGB2Point: 3D Point Cloud Generation from Single RGB Images}},
author = {Lee, Jae Joong and Benes, Bedrich},
booktitle = {Winter Conference on Applications of Computer Vision},
year = {2025},
pages = {2952--2962},
url = {https://mlanthology.org/wacv/2025/lee2025wacv-rgb2point/}
}