Less-Energy-Usage Network with Batch Power Iteration

Abstract

Large-scale neural networks are among the mainstream tools of modern big data analytics, but their training and inference phases incur huge energy consumption and a large carbon footprint. Energy efficiency, running-time complexity, and model storage size are three major considerations when deploying deep neural networks in modern applications. Here we introduce the Less-Energy-Usage Network, or LEAN. Unlike regular network compression (e.g., pruning and knowledge distillation), which transforms a pre-trained huge network into a smaller one, our method builds a lean and effective network during the training phase. It is based on spectral theory and batch power iteration learning, and the technique can be applied to almost any type of neural network to reduce its size. Preliminary experimental results show that LEAN consumes 30% less energy, achieves 95% of the baseline accuracy with a 1.5X speed-up, and uses 90% fewer parameters than the baseline CNN model.
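The abstract does not spell out the batch power iteration algorithm, but the classical power iteration it builds on is standard: repeatedly applying a matrix to a vector converges to the dominant singular direction, which is the basis of spectral low-rank compression of weight matrices. The sketch below is illustrative only (the function name and example matrix are not from the paper) and shows how power iteration yields a rank-1 spectral approximation of a weight matrix.

```python
import numpy as np

def power_iteration(W, num_iters=100, seed=0):
    """Estimate the leading singular triple (u, sigma, v) of W.

    Repeatedly applies W^T W to a random vector; the iterate converges
    to the top right singular vector v, from which the top singular
    value sigma and left singular vector u follow.
    """
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(W.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(num_iters):
        v = W.T @ (W @ v)          # one power-iteration step on W^T W
        v /= np.linalg.norm(v)     # renormalize to avoid overflow
    sigma = np.linalg.norm(W @ v)
    u = (W @ v) / sigma
    return u, sigma, v

# Rank-1 spectral approximation of a (toy) weight matrix:
W = np.array([[3.0, 0.0],
              [0.0, 1.0]])
u, s, v = power_iteration(W)
W_rank1 = s * np.outer(u, v)  # best rank-1 approximation of W
```

Repeating this on the deflated residual `W - W_rank1` recovers further singular directions, which is how a low-rank (and hence smaller, cheaper) factorization of a layer can be accumulated.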

Cite

Text

Huang et al. "Less-Energy-Usage Network with Batch Power Iteration." ICML 2023 Workshops: NCW, 2023.

Markdown

[Huang et al. "Less-Energy-Usage Network with Batch Power Iteration." ICML 2023 Workshops: NCW, 2023.](https://mlanthology.org/icmlw/2023/huang2023icmlw-lessenergyusage/)

BibTeX

@inproceedings{huang2023icmlw-lessenergyusage,
  title     = {{Less-Energy-Usage Network with Batch Power Iteration}},
  author    = {Huang, Hao and Shah, Tapan and Evans, Scott C and Yoo, Shinjae},
  booktitle = {ICML 2023 Workshops: NCW},
  year      = {2023},
  url       = {https://mlanthology.org/icmlw/2023/huang2023icmlw-lessenergyusage/}
}