Tactile-Augmented Radiance Fields
Abstract
We present a scene representation that brings vision and touch into a shared 3D space, which we call a tactile-augmented radiance field (TaRF). This representation capitalizes on two key insights: (i) ubiquitous vision-based touch sensors are built on perspective cameras, and (ii) visually and structurally similar regions of a scene share the same tactile features. We use these insights to train a conditional diffusion model that, provided with an RGB image and a depth map rendered from a neural radiance field, generates the corresponding tactile "image". To train this diffusion model, we collect the largest dataset of spatially aligned visual and tactile data to date. Through qualitative and quantitative experiments, we demonstrate the accuracy of our cross-modal generative model and the utility of the collected and rendered visual-tactile pairs across a range of downstream tasks. Project page: https://dou-yiming.github.io/TaRF
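The abstract describes a conditional diffusion model that maps a NeRF-rendered RGB-D view to a tactile image. The sketch below is a minimal, hypothetical illustration of that interface only: a toy denoiser conditioned on concatenated RGB and depth channels, driven by a simple DDPM-style sampling loop. All names, the network architecture, and the noise schedule are assumptions for illustration and are not the authors' implementation.

```python
# Minimal sketch (assumed, not the paper's code): a denoiser conditioned on an
# RGB image and a depth map generates a tactile "image" via DDPM-style sampling.
import torch
import torch.nn as nn

class ConditionalDenoiser(nn.Module):
    """Predicts the noise in a tactile image given RGB (3ch) + depth (1ch) conditioning."""
    def __init__(self, tactile_ch=3, cond_ch=4, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(tactile_ch + cond_ch + 1, hidden, 3, padding=1), nn.SiLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.SiLU(),
            nn.Conv2d(hidden, tactile_ch, 3, padding=1),
        )

    def forward(self, noisy_tactile, rgb, depth, t):
        # Broadcast the normalized timestep to a feature map and concatenate it
        # with the noisy tactile image and the visual (RGB-D) conditioning.
        t_map = t.view(-1, 1, 1, 1).expand(-1, 1, *noisy_tactile.shape[-2:])
        x = torch.cat([noisy_tactile, rgb, depth, t_map], dim=1)
        return self.net(x)

@torch.no_grad()
def sample_tactile(model, rgb, depth, steps=50):
    """Toy DDPM ancestral sampling with a linear beta schedule."""
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    x = torch.randn(rgb.shape[0], 3, *rgb.shape[-2:])  # start from pure noise
    for i in reversed(range(steps)):
        t = torch.full((rgb.shape[0],), i / steps)
        eps = model(x, rgb, depth, t)
        # Posterior mean under the standard DDPM noise-prediction parameterization.
        x = (x - betas[i] / torch.sqrt(1 - alpha_bars[i]) * eps) / torch.sqrt(alphas[i])
        if i > 0:
            x = x + torch.sqrt(betas[i]) * torch.randn_like(x)
    return x

# Usage with dummy tensors standing in for a NeRF-rendered RGB-D view.
model = ConditionalDenoiser()
rgb = torch.rand(1, 3, 64, 64)
depth = torch.rand(1, 1, 64, 64)
tactile = sample_tactile(model, rgb, depth)
print(tactile.shape)  # torch.Size([1, 3, 64, 64])
```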
Cite
Text
Dou et al. "Tactile-Augmented Radiance Fields." Conference on Computer Vision and Pattern Recognition, 2024. doi:10.1109/CVPR52733.2024.02505

Markdown
[Dou et al. "Tactile-Augmented Radiance Fields." Conference on Computer Vision and Pattern Recognition, 2024.](https://mlanthology.org/cvpr/2024/dou2024cvpr-tactileaugmented/) doi:10.1109/CVPR52733.2024.02505

BibTeX
@inproceedings{dou2024cvpr-tactileaugmented,
title = {{Tactile-Augmented Radiance Fields}},
author = {Dou, Yiming and Yang, Fengyu and Liu, Yi and Loquercio, Antonio and Owens, Andrew},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2024},
pages = {26529--26539},
doi = {10.1109/CVPR52733.2024.02505},
url = {https://mlanthology.org/cvpr/2024/dou2024cvpr-tactileaugmented/}
}