Automated per-instance configuration and selection of algorithms have gained significant momentum in evolutionary computation in recent years. Two crucial, sometimes implicit, ingredients for these AutoML methods are 1) feature-based representations of the problem instances and 2) performance prediction methods that take these features as input to estimate how well a specific algorithm instance will perform on a given problem instance. Unsurprisingly, common ML models fail to make accurate predictions for instances whose feature-based representation is underrepresented or not covered in the training data, resulting in poor generalization ability of the models on problems not seen during training.
In this work, we study leave-one-problem-out (LOPO) performance prediction. We analyze whether standard random forest (RF) model predictions can be improved by calibrating them with a weighted average of performance values obtained by the algorithm on problem instances that are sufficiently close to the problem for which a performance prediction is sought, where closeness is measured by cosine similarity in feature space.
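The calibration idea can be sketched as follows, assuming a plain NumPy implementation; the function name, the similarity threshold `tau`, and the equal-weight blend of the RF prediction with the neighbor average are illustrative assumptions, not the exact combination rule of the RF+clust method:

```python
import numpy as np

def rf_clust_predict(rf_pred, target_feats, train_feats, train_perf, tau=0.9):
    """Blend an RF prediction with a similarity-weighted average of the
    performance values of sufficiently similar training problem instances.

    rf_pred      -- scalar prediction of the standard RF model
    target_feats -- feature vector of the left-out problem instance
    train_feats  -- (n, d) feature matrix of the training instances
    train_perf   -- (n,) observed algorithm performance on those instances
    tau          -- cosine-similarity threshold (illustrative value)
    """
    # Cosine similarity between the target and every training instance.
    norms = np.linalg.norm(train_feats, axis=1) * np.linalg.norm(target_feats)
    sims = train_feats @ target_feats / norms
    # Keep only instances whose similarity exceeds the threshold.
    mask = sims >= tau
    if not mask.any():
        # No sufficiently close instance: fall back to the plain RF prediction.
        return rf_pred
    weights = sims[mask] / sims[mask].sum()
    neighbour_avg = weights @ train_perf[mask]
    # Equal-weight blend of model prediction and neighbour average (an
    # assumption made here for illustration).
    return 0.5 * (rf_pred + neighbour_avg)
```

The fallback to the raw RF prediction when no instance passes the threshold highlights why the choice of `tau` matters: a threshold that is too strict leaves the calibration inactive, while one that is too loose averages over dissimilar problems.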
While our RF+clust approach obtains more accurate performance predictions for several problems, its predictive power crucially depends on the chosen similarity threshold as well as on the feature portfolio over which the cosine similarity is measured. This opens a new angle for feature selection in a zero-shot learning setting, as LOPO is termed in machine learning.