Multi-Modal Relation Distillation for Unified 3D Representation Learning

Abstract

Recent advancements in multi-modal pre-training for 3D point clouds have demonstrated promising results by aligning heterogeneous features across 3D shapes and their corresponding 2D images and language descriptions. However, current straightforward solutions often overlook the intricate structural relations among samples, potentially limiting the full capabilities of multi-modal learning. To address this issue, we introduce Multi-modal Relation Distillation (MRD), a tri-modal pre-training framework designed to effectively distill knowledge from large Vision-Language Models (VLMs) into 3D backbones. MRD captures both the intra-relations within each modality and the cross-relations between different modalities, producing more discriminative 3D shape representations. Notably, MRD achieves significant improvements on downstream zero-shot classification and cross-modal retrieval tasks, delivering new state-of-the-art performance.
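
The abstract does not spell out the objective, but a relation-distillation loss of the kind described can be sketched as matching the student's pairwise similarity structure to that of the frozen teachers. The sketch below is illustrative only and assumes a PyTorch-style setup; the function and variable names (`relation_distill_loss`, `z_3d`, `z_img`, `z_txt`, the temperature `tau`) and the specific choice of KL divergence are assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def relation_matrix(z: torch.Tensor, tau: float = 0.07) -> torch.Tensor:
    """Row-wise softmax over pairwise cosine similarities within a batch."""
    z = F.normalize(z, dim=-1)
    sim = z @ z.t() / tau          # (B, B) similarity logits
    return F.softmax(sim, dim=-1)  # each row: a distribution over the batch

def relation_distill_loss(z_3d: torch.Tensor,
                          z_img: torch.Tensor,
                          z_txt: torch.Tensor,
                          tau: float = 0.07) -> torch.Tensor:
    """Hypothetical relation-distillation objective (not the paper's exact loss).

    z_3d        : (B, D) embeddings from the 3D backbone (student)
    z_img, z_txt: (B, D) embeddings from the frozen VLM (image/text teachers)
    """
    p_3d  = relation_matrix(z_3d, tau)
    p_img = relation_matrix(z_img, tau).detach()  # teachers are frozen
    p_txt = relation_matrix(z_txt, tau).detach()
    # KL(teacher || student) per row, averaged over the two teacher modalities
    loss_img = F.kl_div(p_3d.log(), p_img, reduction="batchmean")
    loss_txt = F.kl_div(p_3d.log(), p_txt, reduction="batchmean")
    return 0.5 * (loss_img + loss_txt)
```

In practice such a relation term would complement, rather than replace, the standard per-sample contrastive alignment between the 3D, image, and text embeddings.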

Cite

Text

Wang et al. "Multi-Modal Relation Distillation for Unified 3D Representation Learning." Proceedings of the European Conference on Computer Vision (ECCV), 2024. doi:10.1007/978-3-031-73414-4_21

Markdown

[Wang et al. "Multi-Modal Relation Distillation for Unified 3D Representation Learning." Proceedings of the European Conference on Computer Vision (ECCV), 2024.](https://mlanthology.org/eccv/2024/wang2024eccv-multimodal/) doi:10.1007/978-3-031-73414-4_21

BibTeX

@inproceedings{wang2024eccv-multimodal,
  title     = {{Multi-Modal Relation Distillation for Unified 3D Representation Learning}},
  author    = {Wang, Huiqun and Bao, Yiping and Pan, Panwang and Li, Zeming and Liu, Xiao and Yang, Ruijie and Huang, Di},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2024},
  doi       = {10.1007/978-3-031-73414-4_21},
  url       = {https://mlanthology.org/eccv/2024/wang2024eccv-multimodal/}
}