Crafting Efficient Neural Graph of Large Entropy
Abstract
Network pruning is widely applied to deep CNN models because of their heavy computation costs; it achieves high performance by keeping important weights while removing redundant ones. However, directly pruning redundant weights may hurt global information flow, which suggests that an efficient sparse network should take graph properties into account. Thus, instead of concentrating on preserving important weights, we focus on the pruned architecture itself. We propose graph entropy as the measurement: it exhibits useful properties for crafting high-quality neural graphs and enables an efficient algorithm for constructing them as the initial network architecture. Our algorithm can be easily implemented and deployed to popular CNN models and achieves better trade-offs.
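Since the abstract hinges on graph entropy as the measure for crafting sparse neural graphs, below is a minimal sketch of one common degree-distribution formulation of graph entropy, applied to a toy pruned bipartite layer. The paper's exact definition and construction algorithm may differ; the function name, the random pruning mask, and the layer sizes here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def graph_entropy(adjacency: np.ndarray) -> float:
    """Degree-distribution (Shannon) entropy of an undirected graph.

    One common formulation: normalize node degrees into a probability
    distribution and take its entropy. Assumed here for illustration;
    the paper may use a different notion of graph entropy.
    """
    degrees = adjacency.sum(axis=1)
    total = degrees.sum()
    if total == 0:
        return 0.0
    p = degrees[degrees > 0] / total
    return float(-(p * np.log(p)).sum())

# Toy example: a sparse bipartite "layer" graph with 4 input and 4 output
# nodes, keeping roughly 50% of connections (hypothetical pruning mask).
rng = np.random.default_rng(0)
mask = rng.random((4, 4)) < 0.5
adj = np.zeros((8, 8))
adj[:4, 4:] = mask      # edges from input nodes to output nodes
adj[4:, :4] = mask.T    # mirror for the undirected adjacency matrix
print(f"graph entropy: {graph_entropy(adj):.4f}")
```

Under this formulation, masks that spread the surviving edges evenly across nodes yield higher entropy than masks that concentrate them on a few nodes, which matches the intuition of preserving global information flow in the pruned graph.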
Cite
Text
Dong et al. "Crafting Efficient Neural Graph of Large Entropy." International Joint Conference on Artificial Intelligence, 2019. doi:10.24963/IJCAI.2019/311
Markdown
[Dong et al. "Crafting Efficient Neural Graph of Large Entropy." International Joint Conference on Artificial Intelligence, 2019.](https://mlanthology.org/ijcai/2019/dong2019ijcai-crafting/) doi:10.24963/IJCAI.2019/311
BibTeX
@inproceedings{dong2019ijcai-crafting,
title = {{Crafting Efficient Neural Graph of Large Entropy}},
author = {Dong, Minjing and Chen, Hanting and Wang, Yunhe and Xu, Chang},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2019},
pages = {2244--2250},
doi = {10.24963/IJCAI.2019/311},
url = {https://mlanthology.org/ijcai/2019/dong2019ijcai-crafting/}
}