Impact of Scaling in ELA Feature Calculation on Algorithm Selection Cross-Benchmark Transferability
Authors
G. Cenikj, G. Petelin, T. Eftimov
Publication
IEEE Congress on Evolutionary Computation (IEEE CEC 2024)
Yokohama, Japan, 1-5 July 2024
Abstract
Exploratory Landscape Analysis (ELA) features are the most common choice for representing single-objective continuous optimization problem instances in Algorithm Selection (AS) methods. However, ELA features have been shown to generalize poorly to unseen problems. Recently, it has also been shown that scaling objective function values before ELA feature calculation can benefit AS methods evaluated on the Black-Box Optimization Benchmarking suite. In this paper, we evaluate whether the same holds true for other benchmarks. In particular, we consider four different benchmark suites and investigate the ability of an AS model trained on one benchmark to generalize to another. We also evaluate the impact of scaling objective function values before ELA feature calculation on AS performance. We observe a benefit of scaling objective function values when all ELA features are used together; however, conflicting outcomes are obtained when different feature groups are used individually. Our analysis shows no benefit of jointly using ELA features calculated on the original and scaled objective function values.
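To illustrate the idea studied in the paper, the sketch below scales objective values before computing landscape-style features. It is a minimal, hypothetical example using NumPy only (not the paper's pipeline, which relies on established ELA tooling): min-max scaling is one common scaling choice, and the two toy features mimic ELA-style statistics — a linear-model coefficient (scale-sensitive) and the skewness of the objective distribution (invariant under affine scaling), which hints at why scaling can affect some feature groups and not others.

```python
import numpy as np

def min_max_scale(y):
    """Min-max scale objective values to [0, 1].

    One common normalization choice; the exact scaling method used in
    the paper is not reproduced here."""
    y = np.asarray(y, dtype=float)
    span = y.max() - y.min()
    return (y - y.min()) / span if span > 0 else np.zeros_like(y)

def lin_model_coef_max(X, y):
    """Max absolute slope of a least-squares linear fit of y on X.

    Mimics an ela_meta-style feature; its value changes when y is
    rescaled, since the fitted coefficients scale with y."""
    A = np.column_stack([np.ones(len(X)), X])  # add intercept column
    coefs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.max(np.abs(coefs[1:])))

def distribution_skewness(y):
    """Skewness of the objective value distribution.

    Mimics an ela_distribution-style feature; it is invariant under
    affine transformations such as min-max scaling."""
    y = np.asarray(y, dtype=float)
    centered = y - y.mean()
    std = y.std()
    return float(np.mean(centered**3) / std**3) if std > 0 else 0.0

# Toy problem instance: sample the 2-D sphere function.
rng = np.random.default_rng(0)
X = rng.uniform(-5, 5, size=(200, 2))
y = np.sum(X**2, axis=1)
y_scaled = min_max_scale(y)

# Scale-sensitive feature changes; skewness does not.
raw_coef, scaled_coef = lin_model_coef_max(X, y), lin_model_coef_max(X, y_scaled)
raw_skew, scaled_skew = distribution_skewness(y), distribution_skewness(y_scaled)
```

In this sketch the linear-model coefficient shrinks when objective values are compressed into [0, 1], while the skewness is unchanged — a small analogue of the mixed, feature-group-dependent outcomes reported in the abstract.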