Learning-to-Learn Non-Convex Piecewise-Lipschitz Functions

Abstract

We analyze the meta-learning of the initialization and step-size of learning algorithms for piecewise-Lipschitz functions, a non-convex setting with applications to both machine learning and algorithms. Starting from recent regret bounds for the exponential forecaster on losses with dispersed discontinuities, we generalize them to be initialization-dependent and then use this result to propose a practical meta-learning procedure that learns both the initialization and the step-size of the algorithm from multiple online learning tasks. Asymptotically, we guarantee that the average regret across tasks scales with a natural notion of task-similarity that measures the amount of overlap between near-optimal regions of different tasks. Finally, we instantiate the method and its guarantee in two important settings: robust meta-learning and multi-task data-driven algorithm design.
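The exponential forecaster at the core of the analysis samples each round's point from a density proportional to exp(-λ · cumulative loss), and the meta-learned initialization replaces the uniform starting density with a prior concentrated on regions that were near-optimal in earlier tasks. Below is a minimal sketch of that update on a discretized one-dimensional domain; the function and parameter names are illustrative rather than from the paper, the actual algorithm samples from a continuous density, and the paper's meta-procedure additionally tunes the step-size λ across tasks.

import numpy as np

def exponential_forecaster(losses, step_size, grid, init_density=None, seed=0):
    """Sketch of the exponential forecaster on a discretized 1-D domain.

    losses:       sequence of loss functions, each mapping a point to [0, 1]
    step_size:    learning rate (lambda > 0)
    grid:         1-D array discretizing the domain (a stand-in for the
                  continuous domain treated in the paper)
    init_density: optional meta-learned initialization; uniform if None
    """
    rng = np.random.default_rng(seed)
    prior = np.ones_like(grid) if init_density is None else init_density(grid)
    cum_loss = np.zeros_like(grid)
    played, total_loss = [], 0.0
    for loss in losses:
        # Sample from the density proportional to prior * exp(-step_size * cum_loss).
        weights = prior * np.exp(-step_size * cum_loss)
        point = rng.choice(grid, p=weights / weights.sum())
        played.append(point)
        total_loss += loss(point)
        # Full-information feedback: update cumulative losses over the whole grid.
        cum_loss += np.array([loss(x) for x in grid])
    return played, total_loss

# Example: piecewise-constant (hence piecewise-Lipschitz) losses whose
# discontinuities are dispersed around 0.5; an initialization concentrated on
# the overlap of near-optimal regions would lower the regret on later tasks.
grid = np.linspace(0.0, 1.0, 1000)
cuts = np.random.default_rng(1).uniform(0.4, 0.6, size=50)
losses = [lambda x, b=b: float(x >= b) for b in cuts]
played, total_loss = exponential_forecaster(losses, step_size=1.0, grid=grid)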

Cite

Text

Balcan et al. "Learning-to-Learn Non-Convex Piecewise-Lipschitz Functions." Neural Information Processing Systems, 2021.

Markdown

[Balcan et al. "Learning-to-Learn Non-Convex Piecewise-Lipschitz Functions." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/balcan2021neurips-learningtolearn/)

BibTeX

@inproceedings{balcan2021neurips-learningtolearn,
  title     = {{Learning-to-Learn Non-Convex Piecewise-Lipschitz Functions}},
  author    = {Balcan, Maria-Florina F. and Khodak, Mikhail and Sharma, Dravyansh and Talwalkar, Ameet},
  booktitle = {Neural Information Processing Systems},
  year      = {2021},
  url       = {https://mlanthology.org/neurips/2021/balcan2021neurips-learningtolearn/}
}