Robust Multi-View Representation Learning (Student Abstract)

Abstract

Multi-view data has become ubiquitous, especially with multi-sensor systems like self-driving cars or medical patient-side monitors. We propose two methods for robust multi-view representation learning that aim to leverage local relationships between views. The first is an extension of Canonical Correlation Analysis (CCA) in which we consider multiple one-vs-rest CCA problems, one for each view, using a group-sparsity penalty to encourage finding local relationships. The second is a straightforward extension of a multi-view AutoEncoder with view-level drop-out. We demonstrate the effectiveness of these methods in simple synthetic experiments, and we describe heuristics and extensions to improve or expand on them.
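The second method's key ingredient, view-level drop-out, can be illustrated with a minimal sketch: during training, each view of a sample is zeroed out in its entirety with some probability, forcing the shared representation to reconstruct missing views from the remaining ones. The function below is a hypothetical helper written for this page, not the authors' implementation; the paper's exact formulation may differ.

```python
import numpy as np

def view_dropout(views, p=0.5, rng=None):
    """Zero out each entire view with probability p.

    Illustrative sketch of view-level drop-out for a multi-view
    AutoEncoder; `views` is a list of (n_samples, d_v) arrays,
    one per view. Not the authors' implementation.
    """
    rng = rng or np.random.default_rng()
    dropped = []
    for v in views:
        keep = rng.random() >= p  # drop the whole view at once
        dropped.append(v * float(keep))
    return dropped

# Toy example: three views of the same 4 samples
views = [np.ones((4, 3)), np.ones((4, 2)), np.ones((4, 5))]
corrupted = view_dropout(views, p=0.5, rng=np.random.default_rng(0))
```

In training, the corrupted views would be fed to the encoder while the reconstruction loss is computed against the original, uncorrupted views, analogous to a denoising AutoEncoder but with structured, view-level corruption.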

Cite

Text

Venkatesan et al. "Robust Multi-View Representation Learning (Student Abstract)." AAAI Conference on Artificial Intelligence, 2020. doi:10.1609/AAAI.V34I10.7242

Markdown

[Venkatesan et al. "Robust Multi-View Representation Learning (Student Abstract)." AAAI Conference on Artificial Intelligence, 2020.](https://mlanthology.org/aaai/2020/venkatesan2020aaai-robust/) doi:10.1609/AAAI.V34I10.7242

BibTeX

@inproceedings{venkatesan2020aaai-robust,
  title     = {{Robust Multi-View Representation Learning (Student Abstract)}},
  author    = {Venkatesan, Sibi and Miller, James Kyle and Dubrawski, Artur},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2020},
  pages     = {13939--13940},
  doi       = {10.1609/AAAI.V34I10.7242},
  url       = {https://mlanthology.org/aaai/2020/venkatesan2020aaai-robust/}
}