Energy-Efficient Gaussian Processes Using Low-Precision Arithmetic

Abstract

The widespread use of artificial intelligence demands energy-efficient paradigms for the field. We propose to reduce the energy consumption of Gaussian process regression by using low-precision floating-point representations. We explore how low-precision representations affect the results of Gaussian process regression and how data set properties, implementation approach, model performance, and energy consumption interact. Our findings show that, given a well-conditioned kernel matrix, energy consumption can be reduced by up to 89.01% for 98.08% of arithmetic operations with little to no impact on model performance. Our findings are relevant whenever one needs to invert a symmetric full-rank matrix.
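The core idea of the abstract can be illustrated with a minimal sketch: run the Cholesky-based solve at the heart of Gaussian process regression in a lower floating-point precision and compare the posterior mean against a double-precision baseline. This is an illustrative NumPy example, not the paper's implementation; the kernel, lengthscale, and noise level are assumptions chosen so that the kernel matrix is well conditioned.

```python
import numpy as np

# Illustrative RBF kernel (lengthscale is an assumption, not from the paper).
def rbf_kernel(X1, X2, lengthscale=1.0):
    sq = (np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :]
          - 2.0 * X1 @ X2.T)
    return np.exp(-0.5 * sq / lengthscale**2)

def gp_posterior_mean(X, y, X_star, noise=1e-2, dtype=np.float64):
    # Cast the regularized kernel matrix to the target precision before
    # factorization; the Cholesky solve is where the bulk of the
    # arithmetic operations (and hence energy) is spent.
    K = (rbf_kernel(X, X) + noise * np.eye(len(X))).astype(dtype)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y.astype(dtype)))
    return rbf_kernel(X_star, X).astype(dtype) @ alpha

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (40, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(40)
X_star = np.linspace(-3, 3, 5)[:, None]

mu64 = gp_posterior_mean(X, y, X_star, dtype=np.float64)
mu32 = gp_posterior_mean(X, y, X_star, dtype=np.float32)

# With a well-conditioned K, the single-precision posterior mean stays
# close to the double-precision result.
print(np.max(np.abs(mu64 - mu32)))
```

The sketch only contrasts float64 with float32; the paper studies a broader range of low-precision representations and measures energy directly, but the same principle applies: a well-conditioned kernel matrix keeps the low-precision solve numerically close to the full-precision one.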

Cite

Text

Alder and Herbrich. "Energy-Efficient Gaussian Processes Using Low-Precision Arithmetic." International Conference on Machine Learning, 2024.

Markdown

[Alder and Herbrich. "Energy-Efficient Gaussian Processes Using Low-Precision Arithmetic." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/alder2024icml-energyefficient/)

BibTeX

@inproceedings{alder2024icml-energyefficient,
  title     = {{Energy-Efficient Gaussian Processes Using Low-Precision Arithmetic}},
  author    = {Alder, Nicolas and Herbrich, Ralf},
  booktitle = {International Conference on Machine Learning},
  year      = {2024},
  pages     = {955--975},
  volume    = {235},
  url       = {https://mlanthology.org/icml/2024/alder2024icml-energyefficient/}
}