On the Evaluation of (Meta-)solver Approaches
Abstract
Meta-solver approaches exploit a number of individual solvers to potentially build a better solver. To assess the performance of meta-solvers, one can adopt the metrics typically used for individual solvers (e.g., runtime or solution quality) or employ more specific evaluation metrics (e.g., measuring how close the meta-solver gets to its virtual best performance). In this paper, building on recently published works, we provide an overview of different performance metrics for evaluating (meta-)solvers, highlighting their strengths and weaknesses.
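As a minimal sketch of the virtual-best-performance idea mentioned above: the virtual best solver (VBS) picks, for each instance, the best result any individual solver achieves, and a meta-solver can be scored by its gap to that oracle. The solver names, runtimes, and the simple average-gap metric below are illustrative assumptions, not the paper's exact definitions.

```python
# Hypothetical runtimes (seconds) of three individual solvers on four instances.
runtimes = {
    "solver_a": [10.0, 50.0, 30.0, 5.0],
    "solver_b": [20.0, 15.0, 60.0, 8.0],
    "solver_c": [12.0, 40.0, 25.0, 9.0],
}

# Virtual best solver (VBS): per instance, the best runtime achieved
# by any individual solver (an oracle, not an actual solver).
vbs = [min(times) for times in zip(*runtimes.values())]

def vbs_gap(meta_runtimes, vbs):
    """Average runtime gap between a meta-solver and the VBS (illustrative metric)."""
    return sum(m - v for m, v in zip(meta_runtimes, vbs)) / len(vbs)

# A hypothetical meta-solver's runtimes on the same four instances.
meta = [11.0, 18.0, 25.0, 6.0]
gap = vbs_gap(meta, vbs)  # average seconds lost w.r.t. the virtual best
```

A gap of zero would mean the meta-solver matches its virtual best performance on every instance; the paper surveys this and other metrics in depth.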
Cite
Text
Amadini et al. "On the Evaluation of (Meta-)solver Approaches." Journal of Artificial Intelligence Research, 2023. doi:10.1613/JAIR.1.14102

Markdown

[Amadini et al. "On the Evaluation of (Meta-)solver Approaches." Journal of Artificial Intelligence Research, 2023.](https://mlanthology.org/jair/2023/amadini2023jair-evaluation/) doi:10.1613/JAIR.1.14102

BibTeX
@article{amadini2023jair-evaluation,
title = {{On the Evaluation of (Meta-)solver Approaches}},
author = {Amadini, Roberto and Gabbrielli, Maurizio and Liu, Tong and Mauro, Jacopo},
journal = {Journal of Artificial Intelligence Research},
year = {2023},
pages = {705--719},
doi = {10.1613/JAIR.1.14102},
volume = {76},
url = {https://mlanthology.org/jair/2023/amadini2023jair-evaluation/}
}