Building Sparse Deep Feedforward Networks Using Tree Receptive Fields
Abstract
Sparse connectivity is an important factor behind the success of convolutional neural networks and recurrent neural networks. In this paper, we consider the problem of learning sparse connectivity for feedforward neural networks (FNNs). The key idea is that a unit should be connected to a small number of strongly correlated units at the next lower level. We use Chow-Liu's algorithm to learn a tree-structured probabilistic model for the units at the current level, use the tree to identify subsets of units that are strongly correlated, and introduce, for each subset, a new unit whose receptive field covers that subset. The procedure is repeated on the new units to build multiple layers of hidden units. The resulting model is called a TRF-net. Empirical results show that, compared to dense FNNs, TRF-nets achieve better or comparable classification performance with far fewer parameters and sparser structures. They are also more interpretable.
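For a concrete picture of the construction described above, the following is a minimal sketch of one round of layer building, not the authors' implementation: the median binarization used to estimate mutual information, the greedy tree-partitioning heuristic, the subset size k=3, and the reliance on numpy/networkx are all simplifying assumptions made for illustration.

import numpy as np
import networkx as nx


def chow_liu_tree(X):
    """Maximum spanning tree over pairwise mutual information (Chow-Liu)."""
    n, d = X.shape
    # Binarize activations at their median so MI can be estimated by counting
    # (an illustrative simplification, not the paper's estimator).
    Xq = (X > np.median(X, axis=0)).astype(int)
    G = nx.Graph()
    G.add_nodes_from(range(d))
    for i in range(d):
        for j in range(i + 1, d):
            joint = np.zeros((2, 2))
            for a in range(2):
                for b in range(2):
                    joint[a, b] = np.mean((Xq[:, i] == a) & (Xq[:, j] == b))
            pi, pj = joint.sum(axis=1), joint.sum(axis=0)
            mi = sum(joint[a, b] * np.log(joint[a, b] / (pi[a] * pj[b]))
                     for a in range(2) for b in range(2) if joint[a, b] > 0)
            G.add_edge(i, j, weight=mi)
    return nx.maximum_spanning_tree(G, weight="weight")


def tree_receptive_fields(tree, k=3):
    """Cut the tree into connected subsets of at most k units; each subset
    becomes the receptive field (fan-in) of one new hidden unit."""
    fields, remaining = [], tree.copy()
    while remaining.number_of_nodes() > 0:
        # Grow a subset outward from a lowest-degree node along tree edges
        # (a hypothetical grouping rule chosen for brevity).
        start = min(remaining.nodes, key=remaining.degree)
        subset = [start]
        frontier = list(remaining.neighbors(start))
        while frontier and len(subset) < k:
            nxt = frontier.pop(0)
            subset.append(nxt)
            frontier.extend(v for v in remaining.neighbors(nxt) if v not in subset)
        remaining.remove_nodes_from(subset)
        fields.append(subset)
    return fields


# Usage: activations at the current level -> sparse receptive fields for new units.
X = np.random.rand(500, 16)                        # 500 samples, 16 units
fields = tree_receptive_fields(chow_liu_tree(X), k=3)
print(fields)                                      # each list is the fan-in of one new unit

As the abstract notes, the procedure is then repeated: the activations of the newly introduced units play the role of X in the next round, yielding multiple sparsely connected layers.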
Cite
Text
Li et al. "Building Sparse Deep Feedforward Networks Using Tree Receptive Fields." International Joint Conference on Artificial Intelligence, 2018. doi:10.24963/IJCAI.2018/700
Markdown
[Li et al. "Building Sparse Deep Feedforward Networks Using Tree Receptive Fields." International Joint Conference on Artificial Intelligence, 2018.](https://mlanthology.org/ijcai/2018/li2018ijcai-building/) doi:10.24963/IJCAI.2018/700
BibTeX
@inproceedings{li2018ijcai-building,
title = {{Building Sparse Deep Feedforward Networks Using Tree Receptive Fields}},
author = {Li, Xiaopeng and Chen, Zhourong and Zhang, Nevin L.},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2018},
pages = {5045-5051},
doi = {10.24963/IJCAI.2018/700},
url = {https://mlanthology.org/ijcai/2018/li2018ijcai-building/}
}