Automated algorithm performance prediction in numerical black-box optimization often relies on problem characterizations, such as exploratory landscape analysis features, which describe optimization problem properties. These features are typically represented in tabular format and used as inputs to machine learning models. However, such approaches often overlook algorithm configurations, a critical factor influencing performance. The relationships among algorithm operators, parameters, problem characteristics, and performance outcomes form a complex structure best represented as a graph.
This work explores the use of heterogeneous graph data structures and graph neural networks to predict optimization algorithm performance by capturing the complex dependencies between problems, algorithm configurations, and performance outcomes. We focus on two modular frameworks, modCMA-ES and modDE, which decompose two widely used derivative-free optimization algorithms: the covariance matrix adaptation evolution strategy (CMA-ES) and differential evolution (DE). These frameworks enable a systematic representation of algorithm variants. We evaluate 324 modCMA-ES and 576 modDE variants on 24 BBOB problems across six runtime budgets and two problem dimensions.
Our graph-based approach reduces MSE by up to 36.6% and improves R^2 by up to 7.41% over traditional tabular-based methods, demonstrating the effectiveness of graph representations and opening a promising research direction at the intersection of optimization and geometric learning.