Efficient Regression with Deep Neural Networks: How Many Datapoints Do We Need?

Abstract

While large datasets facilitate the learning of a robust representation of the data manifold, the ability to obtain similar performance with small datasets is clearly computationally advantageous. This work considers deep neural networks for regression and aims to better understand how to select datapoints so as to minimize the neural network training time; a particular focus is on gaining insight into the structure and number of datapoints needed to learn a robust function representation, and into how the training time varies for deep and wide architectures.

Cite

Text

Lengyel and Borovykh. "Efficient Regression with Deep Neural Networks: How Many Datapoints Do We Need?" NeurIPS 2022 Workshops: HITY, 2022.

Markdown

[Lengyel and Borovykh. "Efficient Regression with Deep Neural Networks: How Many Datapoints Do We Need?" NeurIPS 2022 Workshops: HITY, 2022.](https://mlanthology.org/neuripsw/2022/lengyel2022neuripsw-efficient/)

BibTeX

@inproceedings{lengyel2022neuripsw-efficient,
  title     = {{Efficient Regression with Deep Neural Networks: How Many Datapoints Do We Need?}},
  author    = {Lengyel, Daniel and Borovykh, Anastasia},
  booktitle = {NeurIPS 2022 Workshops: HITY},
  year      = {2022},
  url       = {https://mlanthology.org/neuripsw/2022/lengyel2022neuripsw-efficient/}
}