Representation Learning for Spatial Multimodal Data Integration with Optimal Transport

Abstract

Spatial sequencing technologies have advanced rapidly in the past few years, and multiple molecular modalities of cells -- including mRNA expression, chromatin state, and others -- can now be measured together with their spatial locations in tissue slices. To facilitate scientific discoveries from spatial multi-omics sequencing experiments, methods for integrating multimodal spatial data are critically needed. Here we define the problem of spatial multimodal integration as integrating multiple modalities from related tissue slices into a Common Coordinate Framework (CCF) and learning biologically meaningful representations for each spatial location in the CCF. We introduce a novel machine learning framework combining optimal transport and variational autoencoders to solve this problem. Our method outperforms existing single-cell multi-omics integration methods that ignore spatial information, and it allows researchers to analyze tissues comprehensively by integrating knowledge from spatial slices of multiple modalities.
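
The abstract names two ingredients: optimal transport to align spatial locations across related slices, and a variational autoencoder to learn a representation for each location. The sketch below is a minimal illustration of how such pieces are commonly combined, using the POT library for entropic OT and PyTorch for a small VAE. It is not the authors' actual architecture; the function names (`align_slices`, `SpotVAE`) and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch: OT alignment of two slices + a per-spot VAE embedding.
# Assumptions throughout; this is not the paper's method.
import numpy as np
import ot                      # POT: pip install pot
import torch
import torch.nn as nn


def align_slices(coords_a, coords_b, reg=1e-2):
    """Entropic OT coupling between the spot coordinates of two slices."""
    cost = ot.dist(coords_a, coords_b)      # pairwise squared Euclidean cost
    cost /= cost.max()                      # normalize for numerical stability
    a = np.full(coords_a.shape[0], 1.0 / coords_a.shape[0])  # uniform marginals
    b = np.full(coords_b.shape[0], 1.0 / coords_b.shape[0])
    return ot.sinkhorn(a, b, cost, reg)     # n x m soft spot-to-spot matching


class SpotVAE(nn.Module):
    """Tiny VAE embedding one modality's feature vector per spot."""

    def __init__(self, n_features, latent_dim=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_features, 128), nn.ReLU())
        self.mu = nn.Linear(128, latent_dim)
        self.logvar = nn.Linear(128, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                 nn.Linear(128, n_features))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        return self.dec(z), mu, logvar


# Toy usage: align two slices, then embed one modality's features.
coords_a = np.random.rand(100, 2)           # spot coordinates, slice A
coords_b = np.random.rand(120, 2)           # spot coordinates, slice B
plan = align_slices(coords_a, coords_b)     # transport plan pairing spots
vae = SpotVAE(n_features=2000)
recon, mu, logvar = vae(torch.randn(100, 2000))
```

In a full pipeline, the transport plan would be used to map matched spots into a shared coordinate frame (or to tie their latent codes together during VAE training); here the two steps are shown independently for clarity.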

Cite

Text

Liu and Raphael. "Representation Learning for Spatial Multimodal Data Integration with Optimal Transport." NeurIPS 2023 Workshops: AI4Science, 2023.

Markdown

[Liu and Raphael. "Representation Learning for Spatial Multimodal Data Integration with Optimal Transport." NeurIPS 2023 Workshops: AI4Science, 2023.](https://mlanthology.org/neuripsw/2023/liu2023neuripsw-representation/)

BibTeX

@inproceedings{liu2023neuripsw-representation,
  title     = {{Representation Learning for Spatial Multimodal Data Integration with Optimal Transport}},
  author    = {Liu, Xinhao and Raphael, Benjamin},
  booktitle = {NeurIPS 2023 Workshops: AI4Science},
  year      = {2023},
  url       = {https://mlanthology.org/neuripsw/2023/liu2023neuripsw-representation/}
}