How Far Out of Distribution Can We Go With ELA Features and Still Be Able to Rank Algorithms?
G. Petelin, G. Cenikj
2023 IEEE Symposium Series on Computational Intelligence (IEEE SSCI)
Mexico City, Mexico, 5-8 December 2023
Algorithm selection is a critical aspect of continuous black-box optimization, and various methods have been proposed to choose the most appropriate algorithm for a given problem. One commonly used approach employs Exploratory Landscape Analysis (ELA) features to represent optimization functions and trains a machine-learning meta-model to perform algorithm selection based on these features. However, many meta-models trained on existing benchmarks suffer from limited generalizability. When faced with a new optimization function, these meta-models often struggle to select the most suitable algorithm, restricting their practical application. In this study, we investigate the generalizability of meta-models when tested on previously unseen functions that were not observed during training. Specifically, we train a meta-model on base COmparing Continuous Optimizers (COCO) functions and evaluate its performance on new functions derived as affine combinations of pairs of the base functions. Our findings demonstrate that the task of ranking algorithms becomes substantially more challenging when the functions differ from those encountered during meta-learning training. This indicates that the effectiveness of algorithm selection diminishes when confronted with problem instances that substantially deviate from the training distribution. In such scenarios, meta-models that use ELA features to predict algorithm ranks do not outperform mere predictions of the average algorithm ranks.
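To make the evaluation setup concrete, the idea of deriving new test functions as affine combinations of two base functions can be sketched as follows. This is a simplified illustration only: the base functions shown (plain sphere and Rastrigin, without BBOB shifts, rotations, or any rescaling the paper's construction may apply) and the parameter name `alpha` are assumptions for the example, not the authors' exact construction.

```python
import numpy as np

def sphere(x):
    # Simplified sphere function (no shift or rotation, unlike the BBOB version)
    return float(np.sum(x ** 2))

def rastrigin(x):
    # Simplified Rastrigin function (no shift or rotation)
    return float(10 * len(x) + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))

def affine_combination(f1, f2, alpha):
    # Build a new test function by linearly blending two base functions;
    # alpha in [0, 1] interpolates between f2 (alpha=0) and f1 (alpha=1).
    def f_new(x):
        return alpha * f1(x) + (1 - alpha) * f2(x)
    return f_new

# Example: a 50/50 blend of the two base functions
f = affine_combination(sphere, rastrigin, 0.5)
x = np.zeros(3)
# Both simplified base functions attain 0 at the origin, so the blend does too
print(f(x))  # 0.0
```

Sweeping `alpha` between 0 and 1 produces a family of functions that interpolate between two known landscapes, which is what lets such benchmarks probe how far out of the training distribution a meta-model remains useful.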