TRITON: Neural Neural Textures for Better Sim2Real

Abstract

Unpaired image translation algorithms can be used for sim2real tasks, but many fail to generate temporally consistent results. We present a new approach that combines differentiable rendering with image translation to achieve temporal consistency over indefinite timescales, using surface consistency losses and neural neural textures. We call this algorithm TRITON (Texture Recovering Image Translation Network): an unsupervised, end-to-end, stateless sim2real algorithm that leverages the underlying 3D geometry of input scenes by generating realistic-looking learnable neural textures. By settling on a particular texture for the objects in a scene, we ensure consistency between frames statelessly. TRITON is not limited to camera movements: it can handle the movement and deformation of objects as well, making it useful for downstream tasks such as robotic manipulation. We demonstrate the superiority of our approach both qualitatively and quantitatively, using robotic experiments and comparisons to ground truth photographs. We show that TRITON generates more useful images than other algorithms do. Please see our project website: tritonpaper.github.io
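
To make the core idea concrete, here is a minimal PyTorch-style sketch of a learnable neural texture: a trainable feature grid sampled at UV coordinates rendered from the scene's 3D geometry. The class name, resolution, and channel count below are illustrative assumptions, not TRITON's actual architecture.

import torch
import torch.nn.functional as F

class NeuralTexture(torch.nn.Module):
    # A learnable texture: a trainable grid of features sampled at UV coordinates.
    def __init__(self, channels=3, height=256, width=256):
        super().__init__()
        self.texture = torch.nn.Parameter(torch.randn(1, channels, height, width) * 0.01)

    def forward(self, uv):
        # uv: (B, H, W, 2) surface coordinates in [0, 1], rendered from 3D geometry.
        grid = uv * 2.0 - 1.0  # grid_sample expects coordinates in [-1, 1]
        tex = self.texture.expand(uv.shape[0], -1, -1, -1)
        return F.grid_sample(tex, grid, align_corners=False)

# Every frame samples the same texture parameters, so a surface point that
# reappears under camera or object motion maps to the same feature values:
# consistency between frames, with no recurrent state needed.
texture = NeuralTexture()
uv = torch.rand(2, 128, 128, 2)  # stand-in for UV maps from a differentiable renderer
features = texture(uv)           # shape (2, 3, 128, 128)

Because the texture parameters are shared across all frames, this construction illustrates how the algorithm can stay stateless: consistency comes from the texture itself rather than from remembering previous outputs.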

Cite

Text

Burgert et al. "TRITON: Neural Neural Textures for Better Sim2Real." Conference on Robot Learning, 2022.

Markdown

[Burgert et al. "TRITON: Neural Neural Textures for Better Sim2Real." Conference on Robot Learning, 2022.](https://mlanthology.org/corl/2022/burgert2022corl-triton/)

BibTeX

@inproceedings{burgert2022corl-triton,
  title     = {{TRITON: Neural Neural Textures for Better Sim2Real}},
  author    = {Burgert, Ryan D. and Shang, Jinghuan and Li, Xiang and Ryoo, Michael S.},
  booktitle = {Conference on Robot Learning},
  year      = {2022},
  pages     = {2215--2225},
  volume    = {205},
  url       = {https://mlanthology.org/corl/2022/burgert2022corl-triton/}
}