MASIF: Meta-Learned Algorithm Selection Using Implicit Fidelity Information
Abstract
Selecting a well-performing algorithm for a given task or dataset can be time-consuming and tedious, but is crucial for the successful day-to-day business of developing new AI & ML applications. Algorithm Selection (AS) mitigates this through a meta-model leveraging meta-information about previous tasks. However, most of the available AS methods are error-prone because they characterize a task by either cheap-to-compute properties of the dataset or evaluations of cheap proxy algorithms, called landmarks. In this work, we extend the classical AS data setup to include multi-fidelity information and empirically demonstrate how meta-learning on algorithms' learning behaviour allows us to exploit cheap test-time evidence effectively and combat myopia significantly. We further postulate a budget-regret trade-off w.r.t. the selection process. Our new selector MASIF is able to jointly interpret online evidence on a task in the form of varying-length learning curves without any parametric assumptions by leveraging a transformer-based encoder. This opens up new possibilities for guided rapid prototyping in data science on cheaply observed partial learning curves.
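The abstract's central mechanism is encoding varying-length partial learning curves into a fixed-size representation via attention with a padding mask, so that curves observed at different fidelities become comparable. The following is a minimal illustrative sketch of that idea in NumPy, not the authors' implementation: the projection weights, dimensions, and pooling choice are all stand-in assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def encode_curve(curve, d=8, max_len=10, seed=0):
    """Embed a variable-length learning curve into a fixed-size vector
    using one self-attention pass with a padding mask (illustrative only;
    d, max_len, and the random input projection are hypothetical choices)."""
    rng = np.random.default_rng(seed)
    W_in = rng.standard_normal((1, d))  # stand-in learned projection

    n = len(curve)
    x = np.zeros((max_len, d))
    x[:n] = np.asarray(curve, dtype=float)[:, None] @ W_in  # embed observations
    mask = np.zeros(max_len, dtype=bool)
    mask[:n] = True  # True where the curve was actually observed

    # single-head self-attention; padded key positions get -inf scores
    scores = x @ x.T / np.sqrt(d)
    scores[:, ~mask] = -np.inf
    attn = softmax(scores, axis=-1)
    out = attn @ x

    # mean-pool over observed positions only -> fixed-size embedding
    return out[mask].mean(axis=0)

# curves of different lengths map to embeddings of identical shape
emb_short = encode_curve([0.40, 0.55, 0.60])
emb_long = encode_curve([0.30, 0.50, 0.60, 0.66, 0.70, 0.72])
```

The padding mask is what lets a single encoder consume curves truncated at arbitrary fidelities: padded slots are excluded both from attention (via the `-inf` scores) and from pooling.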
Cite
Text
Ruhkopf et al. "MASIF: Meta-Learned Algorithm Selection Using Implicit Fidelity Information." Transactions on Machine Learning Research, 2023.
Markdown
[Ruhkopf et al. "MASIF: Meta-Learned Algorithm Selection Using Implicit Fidelity Information." Transactions on Machine Learning Research, 2023.](https://mlanthology.org/tmlr/2023/ruhkopf2023tmlr-masif/)
BibTeX
@article{ruhkopf2023tmlr-masif,
title = {{MASIF: Meta-Learned Algorithm Selection Using Implicit Fidelity Information}},
author = {Ruhkopf, Tim and Mohan, Aditya and Deng, Difan and Tornede, Alexander and Hutter, Frank and Lindauer, Marius},
journal = {Transactions on Machine Learning Research},
year = {2023},
url = {https://mlanthology.org/tmlr/2023/ruhkopf2023tmlr-masif/}
}