Self-Supervised Representation Learning for CAD

Abstract

Virtually every object in the modern world was created, modified, analyzed, and optimized using computer-aided design (CAD) tools. An active CAD research area is the use of data-driven machine learning methods to learn from the massive repositories of geometric and program representations. However, the lack of labeled data in CAD's native format, i.e., the parametric boundary representation (B-Rep), poses an obstacle that is at present difficult to overcome. Several datasets of mechanical parts in B-Rep format have recently been released for machine learning research. Yet the large-scale databases are mostly unlabeled, the labeled datasets are small, and task-specific label sets are rare and costly to annotate. This work proposes to leverage unlabeled CAD geometry for supervised learning tasks. We learn a novel, hybrid implicit/explicit surface representation for B-Rep geometry. Further, we show that this pre-training both significantly improves few-shot learning performance and achieves state-of-the-art performance on several current B-Rep benchmarks.

Cite

Text

Jones et al. "Self-Supervised Representation Learning for CAD." Conference on Computer Vision and Pattern Recognition, 2023. doi:10.1109/CVPR52729.2023.02043

Markdown

[Jones et al. "Self-Supervised Representation Learning for CAD." Conference on Computer Vision and Pattern Recognition, 2023.](https://mlanthology.org/cvpr/2023/jones2023cvpr-selfsupervised/) doi:10.1109/CVPR52729.2023.02043

BibTeX

@inproceedings{jones2023cvpr-selfsupervised,
  title     = {{Self-Supervised Representation Learning for CAD}},
  author    = {Jones, Benjamin T. and Hu, Michael and Kodnongbua, Milin and Kim, Vladimir G. and Schulz, Adriana},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2023},
  pages     = {21327--21336},
  doi       = {10.1109/CVPR52729.2023.02043},
  url       = {https://mlanthology.org/cvpr/2023/jones2023cvpr-selfsupervised/}
}