TransOptAS: Transformer-Based Algorithm Selection for Single-Objective Optimization
Authors
G. Cenikj, G. Petelin, T. Eftimov
Publication
The Genetic and Evolutionary Computation Conference GECCO 2024
Melbourne, Australia, 14-18 July 2024
Abstract
Driven by the success of transformer models in representation learning across different Machine Learning fields, this study explores the development of a transformer-based Algorithm Selection (AS) model. We demonstrate the training of a transformer architecture for AS on a collection of single-objective problem instances generated through a random function generator. The transformer model predicts a numeric performance indicator for each algorithm in a given algorithm portfolio, relying solely on raw samples from the objective functions. The AS model is evaluated on two different algorithm portfolios and two different problem dimensions. In both scenarios, transformers consistently outperform a single best solver, indicating that AS can be performed without the need for intermediate feature construction. A comparison to the classic Exploratory Landscape Analysis (ELA) approach shows that the transformer model provides comparable results, with the ELA model typically outperforming the transformer model by margins in the range 0.02-0.05. We observe complementarity in the predictions produced by the ELA and transformer models, indicating a possible benefit of combining both approaches.
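To make the pipeline described above concrete, the sketch below shows how a transformer-style model could map raw objective-function samples directly to a per-algorithm performance indicator, without intermediate landscape features. This is a minimal illustrative sketch, not the authors' implementation: the architecture is reduced to a single self-attention layer with untrained random weights, the sphere function stands in for a generated problem instance, and all names and dimensions are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Scaled dot-product attention over the set of sample tokens.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    weights = softmax(Q @ K.T / np.sqrt(K.shape[1]))
    return weights @ V

# Hypothetical setup: 100 raw samples of a 2-D objective function.
# Each token is (x1, x2, f(x)); the sphere function is a stand-in
# for an instance from the random function generator.
n_samples, dim, d_model, n_algorithms = 100, 2, 16, 4
points = rng.uniform(-5, 5, (n_samples, dim))
fvals = (points ** 2).sum(axis=1, keepdims=True)
tokens = np.hstack([points, fvals])          # shape (100, 3)

# Untrained random weights: token embedding, one attention layer,
# mean-pooling, and a linear head with one output per portfolio algorithm.
W_embed = rng.normal(0, 0.1, (dim + 1, d_model))
Wq = rng.normal(0, 0.1, (d_model, d_model))
Wk = rng.normal(0, 0.1, (d_model, d_model))
Wv = rng.normal(0, 0.1, (d_model, d_model))
W_head = rng.normal(0, 0.1, (d_model, n_algorithms))

X = tokens @ W_embed
H = self_attention(X, Wq, Wk, Wv)
pooled = H.mean(axis=0)                      # permutation-invariant summary
predicted_scores = pooled @ W_head           # one indicator per algorithm

# AS decision: pick the algorithm with the best predicted indicator
# (assuming lower is better, e.g. a regret- or loss-style measure).
selected = int(predicted_scores.argmin())
print(predicted_scores.shape, selected)
```

In a trained model the weights would be fitted by regressing the predicted indicators against the measured performance of each portfolio algorithm on the generated training instances; the mean-pooling step is one simple way to make the prediction invariant to the order of the raw samples.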