Doodle Your 3D: From Abstract Freehand Sketches to Precise 3D Shapes

Abstract

In this paper, we democratise 3D content creation, enabling precise generation of 3D shapes from abstract sketches while overcoming limitations tied to drawing skills. We introduce a novel part-level modelling and alignment framework that facilitates abstraction modelling and cross-modal correspondence. Leveraging the same part-level decoder, our approach seamlessly extends to sketch modelling by establishing correspondence between CLIPasso edgemaps and projected 3D part regions, eliminating the need for a dataset pairing human sketches and 3D shapes. Additionally, our method introduces a seamless in-position editing process as a by-product of cross-modal part-aligned modelling. Operating in a low-dimensional implicit space, our approach significantly reduces computational demands and processing time.
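
To make the idea of part-level implicit modelling concrete, below is a minimal, hypothetical PyTorch sketch of decoding K per-part latent codes (e.g. obtained from a rasterised sketch or edgemap) into an occupancy field. The module names, dimensions, shared per-part MLP, and union-by-max combination are illustrative assumptions only and do not reproduce the paper's actual architecture or training setup.

```python
# Hypothetical illustration of part-level implicit shape decoding.
# All module names, dimensions, and design choices are assumptions
# for illustration; they do not reproduce the paper's architecture.
import torch
import torch.nn as nn


class PartImplicitDecoder(nn.Module):
    """Decodes K per-part latent codes into an occupancy value per query point."""

    def __init__(self, num_parts=4, code_dim=32, hidden=128):
        super().__init__()
        self.num_parts = num_parts
        # One small MLP shared across parts: (xyz + part code) -> occupancy logit.
        self.mlp = nn.Sequential(
            nn.Linear(3 + code_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, part_codes, points):
        # part_codes: (B, K, code_dim), points: (B, N, 3)
        B, N, _ = points.shape
        K = self.num_parts
        pts = points.unsqueeze(1).expand(B, K, N, 3)          # (B, K, N, 3)
        codes = part_codes.unsqueeze(2).expand(B, K, N, -1)   # (B, K, N, C)
        per_part = self.mlp(torch.cat([pts, codes], dim=-1))  # (B, K, N, 1)
        # Combine per-part occupancies as a union (max over parts).
        occupancy = per_part.squeeze(-1).max(dim=1).values    # (B, N)
        return torch.sigmoid(occupancy)


class EdgemapToPartCodes(nn.Module):
    """Maps a rasterised sketch/edgemap to K part-aligned latent codes."""

    def __init__(self, num_parts=4, code_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, num_parts * code_dim),
        )
        self.num_parts, self.code_dim = num_parts, code_dim

    def forward(self, edgemap):
        # edgemap: (B, 1, H, W) -> (B, K, code_dim)
        return self.encoder(edgemap).view(-1, self.num_parts, self.code_dim)


if __name__ == "__main__":
    enc, dec = EdgemapToPartCodes(), PartImplicitDecoder()
    sketch = torch.rand(2, 1, 64, 64)          # placeholder edgemaps
    queries = torch.rand(2, 1024, 3) * 2 - 1   # query points in [-1, 1]^3
    occ = dec(enc(sketch), queries)
    print(occ.shape)                           # torch.Size([2, 1024])
```

Keeping a separate latent code per part is what allows the part-aligned correspondence and the in-position editing described above: in a setup like this sketch, swapping or editing a single part code would change only the corresponding region of the decoded occupancy field.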

Cite

Text

Bandyopadhyay et al. "Doodle Your 3D: From Abstract Freehand Sketches to Precise 3D Shapes." Conference on Computer Vision and Pattern Recognition, 2024. doi:10.1109/CVPR52733.2024.00935

Markdown

[Bandyopadhyay et al. "Doodle Your 3D: From Abstract Freehand Sketches to Precise 3D Shapes." Conference on Computer Vision and Pattern Recognition, 2024.](https://mlanthology.org/cvpr/2024/bandyopadhyay2024cvpr-doodle/) doi:10.1109/CVPR52733.2024.00935

BibTeX

@inproceedings{bandyopadhyay2024cvpr-doodle,
  title     = {{Doodle Your 3D: From Abstract Freehand Sketches to Precise 3D Shapes}},
  author    = {Bandyopadhyay, Hmrishav and Koley, Subhadeep and Das, Ayan and Bhunia, Ayan Kumar and Sain, Aneeshan and Chowdhury, Pinaki Nath and Xiang, Tao and Song, Yi-Zhe},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2024},
  pages     = {9795-9805},
  doi       = {10.1109/CVPR52733.2024.00935},
  url       = {https://mlanthology.org/cvpr/2024/bandyopadhyay2024cvpr-doodle/}
}