Adaptation-Agnostic Meta-Training

Abstract

Many meta-learning algorithms can be formulated as an interleaved process: task-specific predictors are learned during inner-task adaptation, and meta-parameters are updated during the meta-update. The standard meta-training strategy differentiates through the inner-task adaptation procedure to optimize the meta-parameters, which constrains the inner-task algorithm to be analytically solvable. Under this constraint, only simple algorithms with analytical solutions can be used as inner-task algorithms, limiting model expressiveness. To lift this limitation, we propose an adaptation-agnostic meta-training strategy. Following the proposed strategy, we can apply stronger algorithms (e.g., an ensemble of different types of algorithms) as the inner-task algorithm and achieve superior performance compared with popular baselines.
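The mechanical point in the abstract, that the meta-update need not backpropagate through the inner-task solver, can be illustrated with a short sketch. The PyTorch code below shows one way such a strategy could look, not the paper's exact formulation: the feature-extractor meta-parameterization, the ridge-regression inner solver, and names such as meta_train_step are illustrative assumptions.

import torch
import torch.nn.functional as F

def ridge_solver(feats, labels_onehot, lam=1.0):
    # Inner-task algorithm: closed-form ridge regression fit on *detached*
    # support features, so autograd never traces the adaptation step.
    X = feats.detach()
    A = X.T @ X + lam * torch.eye(X.shape[1])
    W = torch.linalg.solve(A, X.T @ labels_onehot)
    return W  # task-specific predictor (carries no gradient)

def meta_train_step(encoder, optimizer, support_x, support_y,
                    query_x, query_y, n_classes, inner_solver=ridge_solver):
    # Meta-update: only the query loss is differentiated, and gradients reach
    # the encoder solely through the query features; the inner solver is a
    # black box to autograd.
    z_support = encoder(support_x)
    z_query = encoder(query_x)
    W = inner_solver(z_support, F.one_hot(support_y, n_classes).float())
    logits = z_query @ W
    loss = F.cross_entropy(logits, query_y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

Because the solver's output carries no gradient, the adaptation step is swappable: replacing ridge_solver with a non-differentiable routine, for instance an ensemble of off-the-shelf classifiers fit on the detached support features, would leave the meta-update unchanged, which is the flexibility the abstract emphasizes.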

Cite

Text

Chen et al. "Adaptation-Agnostic Meta-Training." ICML 2021 Workshops: AutoML, 2021.

Markdown

[Chen et al. "Adaptation-Agnostic Meta-Training." ICML 2021 Workshops: AutoML, 2021.](https://mlanthology.org/icmlw/2021/chen2021icmlw-adaptationagnostic/)

BibTeX

@inproceedings{chen2021icmlw-adaptationagnostic,
  title     = {{Adaptation-Agnostic Meta-Training}},
  author    = {Chen, Jiaxin and Zhan, Li-Ming and Wu, Xiao-Ming and Chung, Fu-lai},
  booktitle = {ICML 2021 Workshops: AutoML},
  year      = {2021},
  url       = {https://mlanthology.org/icmlw/2021/chen2021icmlw-adaptationagnostic/}
}