Benchmarking Footprints of Continuous Black-Box Optimization Algorithms: Explainable Insights into Algorithm Success and Failure
Authors
A. Nikolikj, M. A. Muñoz, T. Eftimov
Publication
Swarm and Evolutionary Computation, 2025
Abstract
The practice of comparing black-box optimization algorithms based on performance statistics over a benchmark suite is increasingly criticized. Critics argue that such comparisons fail to explain why particular algorithms outperform others. Consequently, there is a growing demand for more robust comparison methods that assess the overall efficiency of the algorithms in terms of performance and also account for the specific landscape properties of the optimization problems on which the algorithms are compared. This study introduces a novel approach for comparing algorithms based on the concept of an algorithm footprint, which aims to identify easy and challenging problem instances for a given algorithm. A unique footprint is assigned to each algorithm; the footprints are then compared to highlight problem instances where an algorithm uniquely succeeds or fails, as well as how the algorithms complement each other across the problem instances. Our solution employs a multi-task regression model (MTR) to simultaneously link the performance of multiple algorithms with the landscape features of the problem instances. By applying an Explainable Machine Learning (XML) technique, we quantify and compare the importance of the landscape features for each algorithm. The methodology is applied to a portfolio of three different BBO algorithms, highlighting their success and failure on the Black-Box Optimization Benchmarking (BBOB) suite. The efficacy of our approach is further demonstrated through a comparative analysis with two existing algorithm comparison methods, showcasing the robustness and depth of insights provided by the proposed approach.
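The core pipeline described above can be illustrated in a few lines of Python. The following is a minimal sketch, not the paper's implementation: it uses a multi-output random forest as a stand-in for the MTR model, synthetic data in place of BBOB landscape features and algorithm performance scores, and plain permutation importance as a stand-in for the explainable-ML attribution technique used in the paper. All variable names and sizes are illustrative assumptions.

```python
# Sketch of the footprint idea: one multi-task regression model links the
# landscape features of problem instances to the performance of several
# algorithms at once; per-algorithm feature importances are then extracted.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_instances, n_features, n_algorithms = 300, 8, 3  # hypothetical sizes

# Synthetic "landscape features" X and per-algorithm performance scores y.
X = rng.normal(size=(n_instances, n_features))
W = rng.normal(size=(n_features, n_algorithms))  # each algorithm reacts differently
y = X @ W + 0.1 * rng.normal(size=(n_instances, n_algorithms))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# One multi-output (multi-task) model predicts all algorithms simultaneously;
# sklearn's RandomForestRegressor supports multi-output targets natively.
mtr = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

def per_output_mse(model, X, y):
    """Mean squared error of each output (algorithm) separately."""
    return ((model.predict(X) - y) ** 2).mean(axis=0)

baseline = per_output_mse(mtr, X_te, y_te)

# Permutation importance, kept per algorithm: shuffle one feature at a time
# and record how much each algorithm's prediction error grows.
importance = np.zeros((n_features, n_algorithms))
for j in range(n_features):
    X_perm = X_te.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    importance[j] = per_output_mse(mtr, X_perm, y_te) - baseline

# Comparing the importance profiles across columns contrasts which landscape
# properties drive each algorithm's success or failure.
for a in range(n_algorithms):
    top = np.argsort(importance[:, a])[::-1][:3]
    print(f"algorithm {a}: most influential features {top.tolist()}")
```

Because a single model is shared across all algorithms, the resulting importance profiles are directly comparable, which is what enables the contrastive, footprint-style reading of where one algorithm uniquely succeeds or fails.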