By far the most common approach to understanding and developing optimization heuristics is to analyze the algorithms in the performance space. Some researchers also consider their behavior in the decision space. It is well understood, however, that the interplay between decision and performance space is the most critical aspect. Rigorously analyzing these relationships is nevertheless a tedious and complex task. A key technique developed to support such analyses is fitness landscape analysis (FLA). FLA aims to characterize properties of an optimization problem through sets of features that measure different characteristics, such as its degree of separability, its multimodality, etc.
Per-instance automated algorithm selection and configuration techniques use FLA to train meta-models that aim to predict which algorithm or which configuration works well on a given problem instance. FLA-based per-instance selection and configuration have shown promising performance for a number of classical optimization problems, including SAT solving, AI planning, etc.
In the context of black-box optimization, FLA requires the approximation of feature values from a number of samples. Key design questions in this context concern the selection of meaningful features, their efficient computation, the number of samples required to obtain reliable approximations, the distribution of these samples, the possibility of using algorithms’ trajectory data for feature computation, and many more. Research addressing these questions is subsumed under the term “exploratory landscape analysis” (ELA). In ELA, a large number of different features have been proposed, which raises the need for feature selection, since many features can be highly correlated and can have a detrimental impact on the interpretability of the resulting recommendations. This is where representation learning comes into play. Representation learning has its most important applications in machine learning, where bias and redundancies in the data can have severe effects on performance. It focuses on methods that automatically learn new data representations (i.e., feature engineering) from the raw data in order to improve the performance of machine learning tasks. Representation learning methods are also successfully used to reduce the dimension of the data by automatically detecting correlations.
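As a purely illustrative example of this workflow, the following sketch approximates two commonly used sample-based landscape features (fitness-distance correlation and dispersion) for a set of toy problem instances and then applies PCA as a simple stand-in for representation learning, removing linear correlations from the resulting feature matrix. The specific features, sample sizes, and toy instances are assumptions made for illustration only and are not prescribed by this call.

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)

    def sample_features(f, dim, n_samples=250):
        """Approximate two sample-based landscape features of a black-box function f."""
        X = rng.uniform(-5, 5, size=(n_samples, dim))
        y = np.apply_along_axis(f, 1, X)
        # Fitness-distance correlation: correlation between fitness values and
        # distances to the best sampled point.
        best = X[np.argmin(y)]
        fdc = np.corrcoef(y, np.linalg.norm(X - best, axis=1))[0, 1]
        # Dispersion: average pairwise distance among the best 10% of samples,
        # normalized by the average pairwise distance among all samples.
        def avg_pdist(A):
            d = np.linalg.norm(A[:, None, :] - A[None, :, :], axis=-1)
            return d[np.triu_indices(len(A), k=1)].mean()
        top = X[np.argsort(y)[: max(2, n_samples // 10)]]
        return np.array([fdc, avg_pdist(top) / avg_pdist(X)])

    # Toy problem instances: sphere functions with shifted optima (assumption).
    instances = [lambda x, s=s: np.sum((x - s) ** 2) for s in np.linspace(-2, 2, 8)]
    features = np.vstack([sample_features(f, dim=5) for f in instances])

    # Representation learning stand-in: PCA removes linear correlations between features.
    reduced = PCA(n_components=1).fit_transform(features)
    print(reduced.shape)  # (8, 1): one decorrelated feature per instance

In practice, richer ELA feature sets and more expressive representation learning models would replace these toy components; the sketch only illustrates the pipeline of sampling, feature approximation, and dimensionality reduction.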
In this special session, we are particularly interested in studying how representation learning can contribute to improved performance and to a better understanding of ELA-based analyses, e.g., by automatically reducing bias, correlations and redundancies in the feature data.
All submissions should follow the CEC 2021 submission guidelines provided at the IEEE CEC 2021 Submission Website. Special session papers are treated the same as regular conference papers. Please specify that your paper is intended for the Special Session on RepL4Opt: Representation Learning meets Meta-heuristic Optimization. All papers accepted and presented at CEC 2021 will be included in the conference proceedings published on IEEE Xplore.
In order to participate in this special session, a full or student registration for CEC 2021 is required.
Computer Systems Department
Jožef Stefan Institute, Slovenia
CNRS, LIP6
Sorbonne University, France
Computer Systems Department
Jožef Stefan Institute, Slovenia