A Free-Energy Principle for Representation Learning
Abstract
We employ a formal connection between machine learning and thermodynamics to characterize the quality of learnt representations for transfer learning. We discuss how information-theoretic functionals such as the rate, distortion, and classification loss of a model lie on a convex, so-called equilibrium surface. We prescribe dynamical processes that traverse this surface under constraints, e.g., an iso-classification process that trades off rate and distortion to keep the classification loss unchanged. We demonstrate how this process can be used to transfer representations from a source dataset to a target dataset while keeping the classification loss constant. Experimental validation of the theoretical results is provided on standard image-classification datasets.
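To make the quantities in the abstract concrete, here is a minimal sketch of one plausible formalization; the notation and the exact weighting of rate R, distortion D, and classification loss C below are our assumptions for illustration, not the paper's verbatim definitions. A free energy F is obtained by minimizing a weighted sum of the three functionals over model parameters, and by the envelope theorem F acts as a potential on the equilibrium surface:

\[
F(\lambda, \gamma) \;=\; \min_{\theta}\,\bigl\{\, R(\theta) + \lambda\, D(\theta) + \gamma\, C(\theta) \,\bigr\},
\qquad
D = \frac{\partial F}{\partial \lambda},
\quad
C = \frac{\partial F}{\partial \gamma}.
\]

Under this sketch, an iso-classification process traverses the surface along paths in the \((\lambda, \gamma)\) plane on which the classification loss does not change,

\[
\mathrm{d}C \;=\; \frac{\partial C}{\partial \lambda}\,\mathrm{d}\lambda
\;+\; \frac{\partial C}{\partial \gamma}\,\mathrm{d}\gamma \;=\; 0,
\]

which ties the increments \(\mathrm{d}\lambda\) and \(\mathrm{d}\gamma\) together and trades rate against distortion while C stays constant.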
Cite
Text
Gao and Chaudhari. "A Free-Energy Principle for Representation Learning." ICLR 2020 Workshops: DeepDiffEq, 2020.

Markdown

[Gao and Chaudhari. "A Free-Energy Principle for Representation Learning." ICLR 2020 Workshops: DeepDiffEq, 2020.](https://mlanthology.org/iclrw/2020/gao2020iclrw-freeenergy/)

BibTeX
@inproceedings{gao2020iclrw-freeenergy,
  title     = {{A Free-Energy Principle for Representation Learning}},
  author    = {Gao, Yansong and Chaudhari, Pratik},
  booktitle = {ICLR 2020 Workshops: DeepDiffEq},
  year      = {2020},
  url       = {https://mlanthology.org/iclrw/2020/gao2020iclrw-freeenergy/}
}