Neural Prior for Trajectory Estimation
Abstract
Neural priors are a promising direction for capturing low-level vision statistics without relying on handcrafted regularizers. Recent works have successfully used neural architecture biases to implicitly regularize image denoising, super-resolution, inpainting, synthesis, and scene flow, among other tasks. These methods do not rely on large-scale datasets to capture prior statistics and thus generalize well to out-of-distribution data. Inspired by such advances, we investigate neural priors for trajectory representation. Traditionally, trajectories have been represented by sets of handcrafted bases with limited expressiveness. Here, we propose a neural trajectory prior that captures continuous spatio-temporal information without the need for offline data. We demonstrate how our proposed objective is optimized at runtime to estimate trajectories for two important tasks: Non-Rigid Structure from Motion (NRSfM) and lidar scene flow integration for self-driving scenes. Our results are competitive with many state-of-the-art methods on both tasks.
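To make the idea concrete, here is a minimal sketch of the test-time-optimization principle the abstract describes: a small coordinate network mapping time to position is fitted at runtime to noisy trajectory observations, so the network's smoothness bias acts as an implicit prior (analogous to Deep Image Prior, but over time rather than pixel coordinates). This is an illustrative toy, not the authors' architecture or objective; the data, network size, and optimizer below are all assumptions.

```python
import numpy as np

# Hypothetical noisy observations of a smooth 2D trajectory.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)[:, None]              # time stamps, shape (50, 1)
clean = np.hstack([np.sin(2 * np.pi * t), t ** 2])  # ground-truth trajectory
obs = clean + 0.05 * rng.standard_normal(clean.shape)

# One-hidden-layer MLP f_theta(t) -> (x, y), trained by plain gradient descent.
H = 32
W1 = rng.standard_normal((1, H))
b1 = np.zeros(H)
W2 = 0.1 * rng.standard_normal((H, 2))
b2 = np.zeros(2)

def forward(t):
    """Evaluate the coordinate network at time stamps t."""
    h = np.tanh(t @ W1 + b1)
    return h, h @ W2 + b2

lr = 0.05
losses = []
for step in range(2000):
    h, pred = forward(t)
    err = pred - obs
    losses.append(float((err ** 2).mean()))
    # Manual backprop of the mean-squared-error data term.
    g_pred = 2.0 * err / err.size
    gW2, gb2 = h.T @ g_pred, g_pred.sum(0)
    g_h = (g_pred @ W2.T) * (1.0 - h ** 2)   # tanh derivative
    gW1, gb1 = t.T @ g_h, g_h.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

Because the network is optimized per sequence, no offline training data is needed; the regularization comes entirely from the architecture's inductive bias, which is the property the abstract highlights.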
Cite
Text
Wang et al. "Neural Prior for Trajectory Estimation." Conference on Computer Vision and Pattern Recognition, 2022. doi:10.1109/CVPR52688.2022.00642
Markdown
[Wang et al. "Neural Prior for Trajectory Estimation." Conference on Computer Vision and Pattern Recognition, 2022.](https://mlanthology.org/cvpr/2022/wang2022cvpr-neural/) doi:10.1109/CVPR52688.2022.00642
BibTeX
@inproceedings{wang2022cvpr-neural,
title = {{Neural Prior for Trajectory Estimation}},
author = {Wang, Chaoyang and Li, Xueqian and Pontes, Jhony Kaesemodel and Lucey, Simon},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2022},
pages = {6532--6542},
doi = {10.1109/CVPR52688.2022.00642},
url = {https://mlanthology.org/cvpr/2022/wang2022cvpr-neural/}
}