Streaming Batch Gradient Tracking for Neural Network Training (Student Abstract)

Abstract

Faster and more energy-efficient hardware accelerators are critical for machine learning on very large datasets. The energy cost of performing vector-matrix multiplications and of repeatedly moving neural network models in and out of memory motivates a search for alternative hardware and algorithms. We propose using streaming batch principal component analysis (SBPCA) to compress batch gradient data during training via a rank-k approximation of the total batch update. This approach yields training performance comparable to minibatch gradient descent (MBGD) at the same batch size while reducing overall memory and compute requirements.
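
As a rough illustration of the compression step (a minimal sketch, not the authors' implementation): the batch update to a weight matrix is a sum of rank-1 per-sample outer products, and SBPCA replaces that sum with a rank-k approximation maintained in a streaming fashion. The NumPy snippet below instead forms the exact batch update and then truncates it with an SVD, simply to show what a rank-k update looks like; all variable names and sizes here are illustrative assumptions.

# Illustrative sketch only (assumed names and sizes); SBPCA itself builds
# the approximation online, without storing the full batch of gradients.
import numpy as np

rng = np.random.default_rng(0)
B, n_in, n_out, k = 64, 128, 32, 4        # batch size, layer dims, rank

# Per-sample backprop updates to a weight matrix are rank-1 outer
# products delta_i x_i^T; the exact batch update is their sum.
xs = rng.normal(size=(B, n_in))           # layer inputs
deltas = rng.normal(size=(B, n_out))      # backpropagated errors
batch_update = deltas.T @ xs              # (n_out, n_in), rank <= B

# Rank-k approximation of the batch update via truncated SVD.
U, s, Vt = np.linalg.svd(batch_update, full_matrices=False)
update_k = (U[:, :k] * s[:k]) @ Vt[:k, :]

rel_err = np.linalg.norm(batch_update - update_k) / np.linalg.norm(batch_update)
print(f"rank-{k} update relative error: {rel_err:.3f}")

Storing the truncated factors takes k(n_out + n_in + 1) values rather than the dense n_out x n_in update (or the B per-sample gradient pairs), which is the kind of memory and compute saving the abstract points to.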

Cite

Text

Huang et al. "Streaming Batch Gradient Tracking for Neural Network Training (Student Abstract)." AAAI Conference on Artificial Intelligence, 2020. doi:10.1609/aaai.v34i10.7178

Markdown

[Huang et al. "Streaming Batch Gradient Tracking for Neural Network Training (Student Abstract)." AAAI Conference on Artificial Intelligence, 2020.](https://mlanthology.org/aaai/2020/huang2020aaai-streaming/) doi:10.1609/aaai.v34i10.7178

BibTeX

@inproceedings{huang2020aaai-streaming,
  title     = {{Streaming Batch Gradient Tracking for Neural Network Training (Student Abstract)}},
  author    = {Huang, Siyuan and Hoskins, Brian D. and Daniels, Matthew W. and Stiles, Mark D. and Adam, Gina C.},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2020},
  pages     = {13813--13814},
  doi       = {10.1609/aaai.v34i10.7178},
  url       = {https://mlanthology.org/aaai/2020/huang2020aaai-streaming/}
}