Learning Continuous Environment Fields via Implicit Functions

Abstract

We propose a novel scene representation that encodes reaching distance -- the distance from any position in the scene to a goal along a feasible trajectory. We demonstrate that this environment field representation can directly guide the dynamic behaviors of agents in 2D mazes or 3D indoor scenes. Our environment field is a continuous representation learned via a neural implicit function from discretely sampled training data. We showcase its application to agent navigation in 2D mazes and human trajectory prediction in 3D indoor environments. To produce physically plausible and natural trajectories for humans, we additionally learn a generative model that predicts regions where humans commonly appear, and constrain the environment field to be defined only within such regions. Extensive experiments demonstrate that the proposed method generates feasible and plausible trajectories efficiently and accurately.
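To make the abstract's idea concrete, below is a minimal sketch of an implicit environment field and a greedy navigation step that follows its negative gradient. All names, layer sizes, and the simple position-plus-goal input are illustrative assumptions for a 2D maze setting, not the paper's actual architecture (which also conditions on a scene encoding and, for humans, a learned region model).

```python
import torch
import torch.nn as nn

class EnvironmentField(nn.Module):
    """Hypothetical implicit function: maps a 2D query position and a 2D goal
    to a scalar reaching distance. Sizes and inputs are illustrative only."""

    def __init__(self, hidden_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(4, hidden_dim),   # query position (2D) + goal position (2D)
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
            nn.Softplus(),              # reaching distances are non-negative
        )

    def forward(self, position, goal):
        return self.net(torch.cat([position, goal], dim=-1))


def step_toward_goal(field, position, goal, step_size=0.05):
    """Move one step along the negative gradient of the predicted reaching
    distance -- a simple greedy policy sketch, not the paper's agent model."""
    position = position.clone().requires_grad_(True)
    distance = field(position, goal).sum()
    grad, = torch.autograd.grad(distance, position)
    with torch.no_grad():
        return position - step_size * grad / (grad.norm(dim=-1, keepdim=True) + 1e-8)


# Example usage with an untrained field (random weights).
field = EnvironmentField()
pos = torch.tensor([[0.1, 0.2]])
goal = torch.tensor([[0.9, 0.8]])
next_pos = step_toward_goal(field, pos, goal)
```

In this sketch, training would regress the network's output toward reaching distances computed for discretely sampled position-goal pairs; because the learned field is continuous, its gradient is available everywhere for guiding an agent.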

Cite

Text

Li et al. "Learning Continuous Environment Fields via Implicit Functions." International Conference on Learning Representations, 2022.

Markdown

[Li et al. "Learning Continuous Environment Fields via Implicit Functions." International Conference on Learning Representations, 2022.](https://mlanthology.org/iclr/2022/li2022iclr-learning-a/)

BibTeX

@inproceedings{li2022iclr-learning-a,
  title     = {{Learning Continuous Environment Fields via Implicit Functions}},
  author    = {Li, Xueting and De Mello, Shalini and Wang, Xiaolong and Yang, Ming-Hsuan and Kautz, Jan and Liu, Sifei},
  booktitle = {International Conference on Learning Representations},
  year      = {2022},
  url       = {https://mlanthology.org/iclr/2022/li2022iclr-learning-a/}
}