Information Transfer in Multitask Learning, Data Augmentation, and Beyond

Abstract

A hallmark of human intelligence is that we continue to learn new information and then extrapolate the learned information onto new tasks and domains (see, e.g., Thrun and Pratt (1998)). While this is a fairly intuitive observation, formulating such ideas has proved to be a challenging research problem and continues to inspire new studies. Recently, there has been increasing interest in AI/ML in building models that generalize across tasks, even in the presence of distribution shift. How can we ground this research in a solid framework to develop principled methods for better practice? This talk will present my recent work addressing this research question. My talk will involve three parts: revisiting multitask learning from the lens of deep learning theory, designing principled methods for robust transfer, and algorithmic implications for data augmentation.

Cite

Text

Zhang. "Information Transfer in Multitask Learning, Data Augmentation, and Beyond." AAAI Conference on Artificial Intelligence, 2023. doi:10.1609/AAAI.V37I13.26831

Markdown

[Zhang. "Information Transfer in Multitask Learning, Data Augmentation, and Beyond." AAAI Conference on Artificial Intelligence, 2023.](https://mlanthology.org/aaai/2023/zhang2023aaai-information/) doi:10.1609/AAAI.V37I13.26831

BibTeX

@inproceedings{zhang2023aaai-information,
  title     = {{Information Transfer in Multitask Learning, Data Augmentation, and Beyond}},
  author    = {Zhang, Hongyang R.},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2023},
  pages     = {15464},
  doi       = {10.1609/AAAI.V37I13.26831},
  url       = {https://mlanthology.org/aaai/2023/zhang2023aaai-information/}
}