In benchmarking theory, creating a comprehensive and uniformly distributed set of problems is a crucial first step in designing a good benchmark. However, this step is also one of the hardest, as it can be difficult to determine how to evaluate the quality of the chosen problem set.<br/>
In this article, we evaluate whether the field of exploratory landscape analysis can be used to develop a generalized method of visualizing a set of arbitrary optimization functions. We present a method for visually determining the distribution of problems within a benchmark set that combines exploratory landscape analysis with clustering and t-SNE visualization, and we evaluate and explain the visualization this methodology produces.<br/>
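The pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the three hand-rolled features and the example functions are illustrative stand-ins for a full exploratory landscape analysis feature set (such as that provided by the flacco/pflacco libraries) and for the actual benchmark problems.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.manifold import TSNE
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

def toy_features(f, dim=2, n_samples=256):
    """Compute a small, illustrative feature vector for function f.
    A real ELA pipeline would use many more features (y-distribution,
    meta-model, dispersion, ...)."""
    X = rng.uniform(-5, 5, size=(n_samples, dim))
    y = np.apply_along_axis(f, 1, X)
    z = (y - y.mean()) / y.std()
    return np.array([
        (z ** 3).mean(),                                   # skewness of y-values
        (z ** 4).mean() - 3.0,                             # excess kurtosis
        np.corrcoef(np.linalg.norm(X, axis=1), y)[0, 1],   # radial trend
    ])

# Illustrative test functions (stand-ins for CEC/BBOB problems).
problems = {
    "sphere":     lambda x: np.sum(x ** 2),
    "rastrigin":  lambda x: 10 * len(x) + np.sum(x**2 - 10*np.cos(2*np.pi*x)),
    "rosenbrock": lambda x: np.sum(100*(x[1:]-x[:-1]**2)**2 + (1-x[:-1])**2),
    "linear":     lambda x: np.sum(x),
    "abs":        lambda x: np.sum(np.abs(x)),
}

# One standardized feature vector per problem.
F = StandardScaler().fit_transform(
    np.array([toy_features(f) for f in problems.values()]))

# Cluster problems in feature space, then project to 2-D for plotting.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(F)
emb = TSNE(n_components=2, perplexity=2, init="random",
           random_state=0).fit_transform(F)   # perplexity < number of problems
print(emb.shape)
```

In the resulting 2-D embedding, problems with similar landscape features land close together, which is the visualization the method relies on.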
The proposed method is evaluated on a set of benchmark problems taken from two well-known state-of-the-art real-parameter single-objective optimization benchmarks: the CEC Special Sessions and Competitions on Real-Parameter Single Objective Optimization, and the GECCO Black-Box Optimization Benchmark workshops.<br/>
The main goal of this paper is to present an analysis of how exploratory landscape analysis can be used to visualize a benchmark problem set. We show that this method provides a clear visualization of a benchmark problem set and reveals the similarities of the problems in it by placing similar problems visually close together. We also show that the two benchmarks above contain largely distinct sets of problems with little overlap.<br/>
In addition, by applying feature selection approaches we show that a number of landscape features provided by state-of-the-art exploratory landscape analysis libraries are redundant, and that many of them are not invariant to simple transformations such as scaling and shifting, at least when analyzing these two problem sets.
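The invariance issue can be illustrated with a minimal, hand-rolled example (not an actual library feature): scaling and shifting a function's objective values changes a raw statistic such as the sample mean, while a standardized moment such as skewness is unaffected.

```python
import numpy as np

rng = np.random.default_rng(1)

def skewness(y):
    """Third standardized moment of a sample of objective values."""
    z = (y - y.mean()) / y.std()
    return (z ** 3).mean()

sphere = lambda x: np.sum(x ** 2)
# Scaled-and-shifted variant: same landscape, transformed objective values.
transformed = lambda x: 3.0 * sphere(x) + 10.0

X = rng.uniform(-5, 5, size=(1000, 2))
y1 = np.apply_along_axis(sphere, 1, X)
y2 = np.apply_along_axis(transformed, 1, X)

print(abs(y1.mean() - y2.mean()))        # large: the mean is not invariant
print(abs(skewness(y1) - skewness(y2)))  # ~0: skewness is invariant
```

A feature that shifts like the mean here would place the two variants of the same landscape far apart in feature space, even though they describe the same problem.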