Modeling 3D Shapes by Reinforcement Learning

Abstract

We explore how to enable machines to model 3D shapes like human modelers, using deep reinforcement learning (RL). In 3D modeling software like Maya, a modeler usually creates a mesh model in two steps: (1) approximating the shape using a set of primitives; (2) editing the meshes of the primitives to create detailed geometry. Inspired by such artist-based modeling, we propose a two-step neural framework based on RL to learn 3D modeling policies. By taking actions and collecting rewards in an interactive environment, the agents first learn to parse a target shape into primitives and then to edit the geometry. To effectively train the modeling agents, we introduce a novel training algorithm that combines a heuristic policy, imitation learning, and reinforcement learning. Our experiments show that the agents learn good policies that produce regular and structure-aware mesh models, demonstrating the feasibility and effectiveness of the proposed RL framework.
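The training recipe the abstract describes, warm-starting a modeling policy by imitating a heuristic expert and then fine-tuning it with reinforcement learning, can be illustrated with a minimal sketch. This is not the authors' code: the class and function names (PrimPolicy, imitation_step, reinforce_step) are hypothetical placeholders, and the RL update shown is plain REINFORCE, which may differ from the algorithm used in the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PrimPolicy(nn.Module):
    """Maps a shape observation to a distribution over modeling actions (hypothetical)."""
    def __init__(self, obs_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, obs):
        return self.net(obs)  # action logits

def imitation_step(policy, opt, obs, expert_action):
    """Behaviour cloning on actions proposed by a heuristic expert policy."""
    loss = F.cross_entropy(policy(obs), expert_action)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

def reinforce_step(policy, opt, obs, action, reward):
    """One policy-gradient (REINFORCE) update using environment rewards."""
    logp = F.log_softmax(policy(obs), dim=-1)
    logp_a = logp.gather(1, action.unsqueeze(1)).squeeze(1)
    loss = -(reward * logp_a).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Toy usage with random data, standing in for the interactive modeling environment:
policy = PrimPolicy(obs_dim=64, n_actions=10)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
obs = torch.randn(32, 64)
imitation_step(policy, opt, obs, torch.randint(0, 10, (32,)))        # imitation phase
reinforce_step(policy, opt, obs, torch.randint(0, 10, (32,)), torch.randn(32))  # RL phase

The same two-phase loop would apply to each of the two agents, first the primitive-parsing agent and then the mesh-editing agent, with the reward measuring how well the current model matches the target shape.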

Cite

Text

Lin et al. "Modeling 3D Shapes by Reinforcement Learning." Proceedings of the European Conference on Computer Vision (ECCV), 2020. doi:10.1007/978-3-030-58607-2_32

Markdown

[Lin et al. "Modeling 3D Shapes by Reinforcement Learning." Proceedings of the European Conference on Computer Vision (ECCV), 2020.](https://mlanthology.org/eccv/2020/lin2020eccv-modeling/) doi:10.1007/978-3-030-58607-2_32

BibTeX

@inproceedings{lin2020eccv-modeling,
  title     = {{Modeling 3D Shapes by Reinforcement Learning}},
  author    = {Lin, Cheng and Fan, Tingxiang and Wang, Wenping and Nießner, Matthias},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2020},
  doi       = {10.1007/978-3-030-58607-2_32},
  url       = {https://mlanthology.org/eccv/2020/lin2020eccv-modeling/}
}