Across research disciplines and industry sectors, many real-world problems involve continuous single-objective optimisation for which solutions cannot be computed in analytical or semi-analytical form by deterministic algorithms. Such problems are addressed by Evolutionary Computation (EC), a subfield of Computational Intelligence (CI) whose main research task is to develop algorithms for global optimisation inspired by biological evolution. To demonstrate an algorithm's novelty, researchers apply benchmarking theory, analysing algorithm performance with a quality performance measure on a collection of benchmark problems. Despite the great success achieved in developing such algorithms and their efficiency in solving many application problems, an open issue remains: years of incremental research have shown that recently developed algorithms are not necessarily statistically significantly better than algorithms developed in previous years.
Benchmarking in EC is a crucial task, used to evaluate the performance of an algorithm against other algorithms. Benchmarking theory involves three main questions: i) which problems to choose for benchmarking, ii) how to design the experiment, and iii) how to evaluate performance. The proposed project focuses on methodologies for a proper, unbiased selection of optimisation problems to be used in further analysis.
Even though great progress has been made by proposing more robust statistical methodologies for benchmarking in EC, addressing the third benchmarking question, the selection of the benchmark problems included in the analysis can have a huge impact on the experimental design (the second question) and on the statistical analysis performed using the performance data (the third question). Evaluating the same algorithm portfolio (i.e., set of algorithms) on different sets of benchmark problems can result in different winning algorithms. This means that the selection of the benchmark problems can lead to a biased performance analysis (i.e., selecting benchmark problems in favour of the winning algorithm), allowing researchers to present results that make their newly developed algorithm look superior to the others. Moreover, different selections of the problems can lead to different configurations (i.e., parameters) of the same algorithm performing best, decreasing the generalisation and transferability of the obtained results.
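The effect of benchmark selection on the declared winner can be illustrated with a minimal sketch. All numbers below are synthetic and purely illustrative: a hypothetical performance table for three algorithms on six problems, where ranking the algorithms on two different problem subsets crowns two different "best" algorithms.

```python
import numpy as np

# Hypothetical performance table: mean best fitness (lower is better)
# of three algorithms on six benchmark problems. Values are invented
# for illustration only.
perf = np.array([
    [0.10, 0.80, 0.20, 0.90, 0.15, 0.85],  # algorithm A
    [0.50, 0.30, 0.55, 0.35, 0.50, 0.30],  # algorithm B
    [0.70, 0.20, 0.75, 0.25, 0.70, 0.20],  # algorithm C
])
algos = ["A", "B", "C"]

def winner(problem_idx):
    """Return the algorithm with the best mean rank on the chosen problems."""
    # Double argsort turns each column of fitness values into ranks (0 = best).
    ranks = perf[:, problem_idx].argsort(axis=0).argsort(axis=0)
    return algos[int(ranks.mean(axis=1).argmin())]

print(winner([0, 2, 4]))  # subset favouring A -> "A"
print(winner([1, 3, 5]))  # subset favouring C -> "C"
```

The same portfolio, evaluated honestly on both subsets, yields contradictory conclusions, which is exactly the bias an unbiased problem-selection methodology aims to prevent.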
The main objective of the RESPONSE project is to reduce benchmarking bias in EC by inventing, developing, implementing, and evaluating a framework for in-depth optimisation landscape analysis, consisting of methodologies that explore the expressiveness and robustness of the landscape characteristics of the problems in order to find useful problem representations (i.e., a feature portfolio). The RESPONSE framework will help avoid reinventing the wheel in developing novel black-box continuous optimisation algorithms and will decrease duplication of effort. The methodologies for in-depth optimisation landscape analysis will be developed through a synergy of representation learning, machine learning, and statistics. Their development is strongly motivated by the continuous growth of industrial optimisation problems, which requires transferring the knowledge gained from benchmarking studies into industry. Given a new industrial optimisation problem to be solved, we can use its landscape representation (i.e., characteristics) to find the most similar existing problem(s) for which we already know which algorithm performs best. Furthermore, this kind of knowledge (i.e., past experimental data) allows us to apply meta-learning approaches to find the most suitable algorithm for new, unseen industrial problems (i.e., the algorithm selection problem).
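The similarity-based algorithm selection described above can be sketched as a nearest-neighbour lookup over landscape-feature vectors. Everything here is an assumption for illustration: the feature values, the problem names, and the per-problem best algorithms are invented, and real landscape characteristics (e.g., ELA-style features) would be computed from samples of the objective function.

```python
import numpy as np

# Hypothetical landscape-feature vectors for problems that have already
# been benchmarked, together with the algorithm that performed best on
# each. All names and numbers are illustrative placeholders.
known_features = np.array([
    [0.9, 0.1, 0.3],   # problem P1
    [0.2, 0.8, 0.5],   # problem P2
    [0.4, 0.4, 0.9],   # problem P3
])
best_algo = ["CMA-ES", "DE", "PSO"]

def recommend(new_features):
    """Nearest-neighbour algorithm selection: reuse the best-known
    algorithm of the most similar already-benchmarked problem."""
    dists = np.linalg.norm(known_features - np.asarray(new_features), axis=1)
    return best_algo[int(dists.argmin())]

# A new industrial problem whose features resemble P1:
print(recommend([0.85, 0.15, 0.25]))  # -> "CMA-ES"
```

In practice the lookup would be replaced by a trained meta-learning model, but the nearest-neighbour view captures the core idea: knowledge transfers from benchmarked problems to unseen ones through their landscape representations.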