InstructScene: Instruction-Driven 3D Indoor Scene Synthesis with Semantic Graph Prior

Abstract

Comprehending natural language instructions is a desirable property for 3D indoor scene synthesis systems. Existing methods directly model joint object distributions and express object relations only implicitly within a scene, thereby hindering the controllability of generation. We introduce InstructScene, a novel generative framework that integrates a semantic graph prior and a layout decoder to improve controllability and fidelity for 3D scene synthesis. The proposed semantic graph prior jointly learns scene appearance and layout distributions, exhibiting versatility across various downstream tasks in a zero-shot manner. To facilitate benchmarking of text-driven 3D scene synthesis, we curate a high-quality dataset of scene-instruction pairs with large language and multimodal models. Extensive experimental results reveal that the proposed method surpasses existing state-of-the-art approaches by a large margin. Thorough ablation studies confirm the efficacy of crucial design components. Project page: https://chenguolin.github.io/projects/InstructScene.
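
The abstract describes a two-stage design: an instruction is first mapped to a semantic graph (object categories plus pairwise relations) by the graph prior, and a layout decoder then turns that graph into concrete object placements. The Python sketch below only illustrates this data flow under assumed, hypothetical names (SemanticGraph, graph_prior, layout_decoder); it is not the authors' implementation, and the learned generative components are replaced by keyword matching and random sampling purely for illustration.

# Illustrative sketch of the instruction -> semantic graph -> layout pipeline.
# NOT the authors' code: real components are learned models; these are stand-ins.
from dataclasses import dataclass, field
import random

@dataclass
class SemanticGraph:
    categories: list          # one node per object, e.g. ["bed", "nightstand"]
    relations: dict           # directed edges, e.g. {(0, 1): "next to"}

@dataclass
class SceneLayout:
    positions: list = field(default_factory=list)      # (x, y, z) per object
    sizes: list = field(default_factory=list)           # (w, h, d) per object
    orientations: list = field(default_factory=list)    # yaw angle per object

def graph_prior(instruction):
    """Stand-in for the learned semantic graph prior (here: keyword matching)."""
    vocab = ["bed", "nightstand", "wardrobe", "desk", "chair"]
    cats = [w for w in vocab if w in instruction.lower()] or ["bed"]
    rels = {(i, i + 1): "next to" for i in range(len(cats) - 1)}
    return SemanticGraph(cats, rels)

def layout_decoder(graph):
    """Stand-in for the learned layout decoder (here: random placements)."""
    layout = SceneLayout()
    for _ in graph.categories:
        layout.positions.append(tuple(round(random.uniform(-2, 2), 2) for _ in range(3)))
        layout.sizes.append(tuple(round(random.uniform(0.3, 2.0), 2) for _ in range(3)))
        layout.orientations.append(round(random.uniform(0.0, 6.28), 2))
    return layout

if __name__ == "__main__":
    graph = graph_prior("A bedroom with a bed, a nightstand and a wardrobe")
    print(graph)
    print(layout_decoder(graph))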

Cite

Text

Lin and Mu. "InstructScene: Instruction-Driven 3D Indoor Scene Synthesis with Semantic Graph Prior." International Conference on Learning Representations, 2024.

Markdown

[Lin and Mu. "InstructScene: Instruction-Driven 3D Indoor Scene Synthesis with Semantic Graph Prior." International Conference on Learning Representations, 2024.](https://mlanthology.org/iclr/2024/lin2024iclr-instructscene/)

BibTeX

@inproceedings{lin2024iclr-instructscene,
  title     = {{InstructScene: Instruction-Driven 3D Indoor Scene Synthesis with Semantic Graph Prior}},
  author    = {Lin, Chenguo and Mu, Yadong},
  booktitle = {International Conference on Learning Representations},
  year      = {2024},
  url       = {https://mlanthology.org/iclr/2024/lin2024iclr-instructscene/}
}