Doubly Sparsifying Network
Abstract
We propose the doubly sparsifying network (DSN), drawing inspiration from the double sparsity model for dictionary learning. DSN emphasizes the joint exploitation of both the problem structure and the parameter structure. It simultaneously sparsifies the output features and the learned model parameters under one unified framework. DSN enjoys intuitive model interpretation, compact model size, and low complexity. We compare DSN against several carefully designed baselines to verify its consistently superior performance in a wide range of settings. Encouraged by its robustness to insufficient training data, we explore the applicability of DSN to brain signal processing, a challenging interdisciplinary area. DSN is evaluated on two mainstream tasks, electroencephalographic (EEG) signal classification and blood oxygenation level dependent (BOLD) response prediction, achieving promising results on both.
Cite
Text
Wang et al. "Doubly Sparsifying Network." International Joint Conference on Artificial Intelligence, 2017. doi:10.24963/IJCAI.2017/421
Markdown
[Wang et al. "Doubly Sparsifying Network." International Joint Conference on Artificial Intelligence, 2017.](https://mlanthology.org/ijcai/2017/wang2017ijcai-doubly/) doi:10.24963/IJCAI.2017/421
BibTeX
@inproceedings{wang2017ijcai-doubly,
title = {{Doubly Sparsifying Network}},
author = {Wang, Zhangyang and Huang, Shuai and Zhou, Jiayu and Huang, Thomas S.},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2017},
pages = {3020-3026},
doi = {10.24963/IJCAI.2017/421},
url = {https://mlanthology.org/ijcai/2017/wang2017ijcai-doubly/}
}