A Touch, Vision, and Language Dataset for Multimodal Alignment

Abstract

Touch is an important sensing modality for humans, but it has not yet been incorporated into a multimodal generative language model. This is partially due to the difficulty of obtaining natural language labels for tactile data and the complexity of aligning tactile readings with both visual observations and language descriptions. As a step towards bridging that gap, this work introduces a new dataset of 44K in-the-wild vision-touch pairs, with English language labels annotated by humans (10%) and textual pseudo-labels from GPT-4V (90%). We use this dataset to train a vision-language-aligned tactile encoder for open-vocabulary classification and a touch-vision-language (TVL) model for text generation using the trained encoder. Results suggest that by incorporating touch, the TVL model improves tactile-vision-language alignment (+29% classification accuracy) over existing models trained on any pair of those modalities. Although only a small fraction of the dataset is human-labeled, the TVL model demonstrates improved visual-tactile understanding over GPT-4V (+12%) and open-source vision-language models (+32%) on a new touch-vision understanding benchmark. Code, checkpoints, and data are available at https://tactile-vlm.github.io.
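
Because the tactile encoder is trained into the same embedding space as the vision and text encoders, open-vocabulary classification reduces to nearest-neighbor retrieval over text embeddings. The following is a minimal sketch of that inference step, assuming a trained tactile encoder and a frozen CLIP-style text encoder; the names tactile_encoder and text_encoder and the prompt template are illustrative placeholders, not the released API.

import torch
import torch.nn.functional as F

@torch.no_grad()
def classify_touch(tactile_image: torch.Tensor,
                   candidate_labels: list[str],
                   tactile_encoder,
                   text_encoder) -> str:
    """Pick the label whose text embedding is closest to the touch embedding."""
    # Embed the tactile reading into the shared vision-language space.
    touch_emb = F.normalize(tactile_encoder(tactile_image), dim=-1)   # (1, D)
    # Embed each candidate label; the prompt template is a placeholder.
    text_embs = F.normalize(
        torch.stack([text_encoder(f"This feels {label}.")
                     for label in candidate_labels]), dim=-1)         # (N, D)
    # Cosine similarity between the touch embedding and every label.
    scores = touch_emb @ text_embs.T                                  # (1, N)
    return candidate_labels[scores.argmax().item()]

Because the label set is supplied only at inference time, new tactile vocabulary can be evaluated without retraining the encoder.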

Cite

Text

Fu et al. "A Touch, Vision, and Language Dataset for Multimodal Alignment." International Conference on Machine Learning, 2024.

Markdown

[Fu et al. "A Touch, Vision, and Language Dataset for Multimodal Alignment." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/fu2024icml-touch/)

BibTeX

@inproceedings{fu2024icml-touch,
  title     = {{A Touch, Vision, and Language Dataset for Multimodal Alignment}},
  author    = {Fu, Letian and Datta, Gaurav and Huang, Huang and Panitch, William Chung-Ho and Drake, Jaimyn and Ortiz, Joseph and Mukadam, Mustafa and Lambeta, Mike and Calandra, Roberto and Goldberg, Ken},
  booktitle = {International Conference on Machine Learning},
  year      = {2024},
  pages     = {14080--14101},
  volume    = {235},
  url       = {https://mlanthology.org/icml/2024/fu2024icml-touch/}
}