Task Switching Network for Multi-Task Learning
Abstract
We introduce Task Switching Networks (TSNs), a task-conditioned architecture with a single unified encoder/decoder for efficient multi-task learning. Multiple tasks are performed by switching between them, performing one task at a time. TSNs have a constant number of parameters irrespective of the number of tasks. This scalable yet conceptually simple approach circumvents the overhead and intricacy of task-specific network components in existing works. In fact, we demonstrate for the first time that multi-tasking can be performed with a single task-conditioned decoder. We achieve this by learning task-specific conditioning parameters through a jointly trained task embedding network, encouraging constructive interaction between tasks. Experiments validate the effectiveness of our approach, achieving state-of-the-art results on two challenging multi-task benchmarks, PASCAL-Context and NYUD. Our analysis of the learned task embeddings further indicates a connection to task relationships studied in the recent literature.
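To make the switching idea concrete, here is a minimal runnable sketch of task-conditioned decoding: a learned per-task embedding is mapped to per-channel scale/shift parameters that modulate one shared feature map, one task at a time. The FiLM-style scale/shift conditioning, the sizes, and all variable names here are illustrative assumptions, not the paper's exact conditioning mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes, chosen only for illustration.
num_tasks, emb_dim, channels = 3, 8, 16

# Learned task embeddings: one vector per task (jointly trained in the paper).
task_embeddings = rng.standard_normal((num_tasks, emb_dim))

# A tiny stand-in for the task embedding network: maps an embedding to
# per-channel scale and shift parameters for the shared decoder features.
W = rng.standard_normal((emb_dim, 2 * channels)) * 0.1

def condition(features, task_id):
    """Modulate shared decoder features with task-specific scale/shift
    (FiLM-style conditioning; an assumption for this sketch)."""
    params = task_embeddings[task_id] @ W            # (2*channels,)
    scale, shift = params[:channels], params[channels:]
    # Broadcast the per-channel parameters over the spatial dimensions.
    return features * (1.0 + scale[:, None, None]) + shift[:, None, None]

# One shared feature map, "switched" between tasks one at a time:
# the decoder weights are constant regardless of the number of tasks.
shared = rng.standard_normal((channels, 4, 4))       # (C, H, W)
out_seg = condition(shared, task_id=0)               # e.g. segmentation pass
out_depth = condition(shared, task_id=1)             # e.g. depth pass

print(out_seg.shape)                                 # same shape as the input
print(np.allclose(out_seg, out_depth))               # different per-task outputs
```

Note how adding a task only adds one embedding row, which is what keeps the parameter count essentially constant in the number of tasks.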
Cite
Text
Sun et al. "Task Switching Network for Multi-Task Learning." International Conference on Computer Vision, 2021. doi:10.1109/ICCV48922.2021.00818
Markdown
[Sun et al. "Task Switching Network for Multi-Task Learning." International Conference on Computer Vision, 2021.](https://mlanthology.org/iccv/2021/sun2021iccv-task/) doi:10.1109/ICCV48922.2021.00818
BibTeX
@inproceedings{sun2021iccv-task,
title = {{Task Switching Network for Multi-Task Learning}},
author = {Sun, Guolei and Probst, Thomas and Paudel, Danda Pani and Popović, Nikola and Kanakis, Menelaos and Patel, Jagruti and Dai, Dengxin and Van Gool, Luc},
booktitle = {International Conference on Computer Vision},
year = {2021},
pages = {8291-8300},
doi = {10.1109/ICCV48922.2021.00818},
url = {https://mlanthology.org/iccv/2021/sun2021iccv-task/}
}