Implicit Bias of Mirror Flow on Separable Data

Abstract

We examine the continuous-time counterpart of mirror descent, namely mirror flow, on classification problems which are linearly separable. Such problems are minimised ‘at infinity’ and admit many possible solutions; we study which solution is preferred by the algorithm depending on the mirror potential. For exponentially tailed losses and under mild assumptions on the potential, we show that the iterates converge in direction towards a $\phi_\infty$-maximum margin classifier. The function $\phi_\infty$ is the horizon function of the mirror potential and characterises its shape ‘at infinity’. When the potential is separable, a simple formula allows us to compute this function. We analyse several examples of potentials and provide numerical experiments highlighting our results.
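To make the setting concrete, below is a minimal numerical sketch (not the authors' code) of the dynamics the abstract describes: mirror descent with a small step size, i.e. a forward-Euler discretisation of the mirror flow d/dt ∇φ(w_t) = −∇L(w_t), run on a toy linearly separable dataset with the exponential loss. The potential φ(w) = Σ_i cosh(w_i), the Gaussian blob data, and the step size are illustrative assumptions chosen only so that ∇φ and its inverse have closed forms; the point of the sketch is that the norm of the iterates diverges while their direction converges.

import numpy as np

# Toy linearly separable data: two Gaussian blobs with labels +/-1 (assumed for illustration).
rng = np.random.default_rng(0)
n, d = 40, 2
X = np.vstack([rng.normal(+2.0, 0.5, size=(n // 2, d)),
               rng.normal(-2.0, 0.5, size=(n // 2, d))])
y = np.concatenate([np.ones(n // 2), -np.ones(n // 2)])

def loss_grad(w):
    """Gradient of the exponential loss L(w) = mean_i exp(-y_i <x_i, w>)."""
    margins = y * (X @ w)
    return -(X * (y * np.exp(-margins))[:, None]).mean(axis=0)

# Hypothetical separable potential phi(w) = sum_i cosh(w_i),
# so grad phi = sinh and (grad phi)^{-1} = arcsinh (chosen for illustration only).
grad_phi = np.sinh
grad_phi_inv = np.arcsinh

# Forward-Euler discretisation of the mirror flow  d/dt grad_phi(w_t) = -grad L(w_t),
# i.e. mirror descent with a small step size.
w = np.zeros(d)
eta = 0.1
for t in range(20000):
    z = grad_phi(w) - eta * loss_grad(w)   # step in dual (mirror) coordinates
    w = grad_phi_inv(z)                    # map back to primal coordinates

# On separable data the norm of w diverges; the quantity of interest is the direction.
direction = w / np.linalg.norm(w)
print("normalised iterate (limit direction):", direction)
print("minimum normalised margin:", (y * (X @ direction)).min())

Running the sketch longer (or with a smaller step size, closer to the continuous flow) makes the normalised iterate stabilise; the paper's result identifies this limit direction as a maximum margin classifier with respect to the horizon function $\phi_\infty$ of the chosen potential.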

Cite

Text

Pesme et al. "Implicit Bias of Mirror Flow on Separable Data." Neural Information Processing Systems, 2024. doi:10.52202/079017-3624

Markdown

[Pesme et al. "Implicit Bias of Mirror Flow on Separable Data." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/pesme2024neurips-implicit/) doi:10.52202/079017-3624

BibTeX

@inproceedings{pesme2024neurips-implicit,
  title     = {{Implicit Bias of Mirror Flow on Separable Data}},
  author    = {Pesme, Scott and Dragomir, Radu-Alexandru and Flammarion, Nicolas},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-3624},
  url       = {https://mlanthology.org/neurips/2024/pesme2024neurips-implicit/}
}