Robust Error Bounds for Quantised and Pruned Neural Networks
Abstract
A new focus in machine learning is understanding the issues faced when implementing neural networks on low-cost and memory-limited hardware, for example smartphones. This approach falls under the umbrella of “decentralised” learning and, compared to the “centralised” case where data is collected and acted upon by a large server held offline, offers greater privacy protection and faster reaction to incoming data. However, when neural networks are implemented on limited hardware there are no guarantees that their outputs will not be significantly corrupted. This problem is addressed in this talk, where a semi-definite program is introduced to robustly bound the error induced by implementing neural networks on limited hardware. The method can be applied to generic neural networks and is able to account for the many nonlinearities of the problem. It is hoped that the computed bounds will give certainty to software/control/ML engineers implementing these algorithms efficiently on limited hardware.
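The abstract's central quantity is the output error a network incurs when its weights are quantised for limited hardware. As a minimal sketch (not the paper's semi-definite programming method), the snippet below uniformly quantises the weights of a small two-layer ReLU network and samples the resulting output deviation; the network sizes, bit-width, and input set are arbitrary assumptions for illustration.

```python
import numpy as np

# Illustrative sketch only: empirically sample the output error introduced by
# uniformly quantising the weights of a small two-layer ReLU network.
# Sampling yields a lower bound on the worst-case error; the paper's
# semi-definite program instead certifies an upper bound.
rng = np.random.default_rng(0)

W1, b1 = rng.standard_normal((8, 4)), rng.standard_normal(8)
W2, b2 = rng.standard_normal((2, 8)), rng.standard_normal(2)

def quantise(w, bits=4):
    """Uniform symmetric quantisation to 2**(bits) levels."""
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

def forward(x, W1, b1, W2, b2):
    """Two-layer ReLU network."""
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

W1q, W2q = quantise(W1), quantise(W2)

# Sample inputs from a bounded set and record the largest observed deviation
# between the full-precision and quantised networks.
errs = [np.linalg.norm(forward(x, W1, b1, W2, b2)
                       - forward(x, W1q, b1, W2q, b2))
        for x in rng.uniform(-1.0, 1.0, size=(1000, 4))]
print(f"max sampled output error: {max(errs):.4f}")
```

Note that no amount of sampling certifies a robust bound over the whole input set, which is precisely why the paper formulates the problem as a semi-definite program.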
Cite
Text
Li et al. "Robust Error Bounds for Quantised and Pruned Neural Networks." Proceedings of the 3rd Conference on Learning for Dynamics and Control, 2021.
Markdown
[Li et al. "Robust Error Bounds for Quantised and Pruned Neural Networks." Proceedings of the 3rd Conference on Learning for Dynamics and Control, 2021.](https://mlanthology.org/l4dc/2021/li2021l4dc-robust/)
BibTeX
@inproceedings{li2021l4dc-robust,
title = {{Robust Error Bounds for Quantised and Pruned Neural Networks}},
author = {Li, Jiaqi and Drummond, Ross and Duncan, Stephen R.},
booktitle = {Proceedings of the 3rd Conference on Learning for Dynamics and Control},
year = {2021},
pages = {361--372},
volume = {144},
url = {https://mlanthology.org/l4dc/2021/li2021l4dc-robust/}
}