One of the most challenging aspects of solving optimization problems with evolutionary algorithms and other optimization heuristics is the selection of the control parameters that determine their behavior. State-of-the-art heuristics expose several control parameters, and their settings typically have a significant impact on the performance of the algorithm. In evolutionary algorithms, for example, we typically need to choose the population size, the mutation strength, the crossover rate, the selective pressure, etc.
Two principal approaches to the parameter selection problem exist:
(1) parameter tuning, which seeks the parameter values that are most suitable for the problem instances at hand, and
(2) parameter control, which aims to identify good parameter settings “on the fly”, i.e., during the optimization itself.
Parameter control has the advantage that no prior training is needed. It also accounts for the fact that the optimal parameter values typically change during the optimization process: for example, at the beginning of an optimization process we typically aim for exploration, while in the later stages we want the algorithm to converge and to focus its search on the most promising regions in the search space.
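To make this idea concrete, a classic parameter control mechanism in discrete optimization is a one-fifth-success-rule style adaptation of the mutation rate in a (1+1) EA: increase the rate after a successful step, decrease it after a failure. The sketch below applies this scheme to the OneMax benchmark; the function names, the update factor `F`, and the rate bounds are illustrative choices of ours, not a prescription from the tutorial.

```python
import random

def onemax(x):
    """Fitness of a bit string: number of 1-bits (to be maximized)."""
    return sum(x)

def one_plus_one_ea(n=100, budget=20000, F=1.5, seed=0):
    """(1+1) EA with a success-based adaptation of the mutation rate p.

    After a strict improvement p is multiplied by F (more exploration);
    otherwise it is divided by F**0.25 (more exploitation), so that p
    drifts down when only about 1 in 5 offspring succeed. p is kept
    within [1/n, 1/2]. These constants are illustrative, not canonical.
    """
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = onemax(x)
    p = 1.0 / n  # the standard static default, used here as starting value
    for _ in range(budget):
        if fx == n:  # global optimum of OneMax found
            break
        # standard bit mutation: flip each bit independently with probability p
        y = [1 - b if rng.random() < p else b for b in x]
        fy = onemax(y)
        if fy > fx:  # success: accept offspring, become more explorative
            x, fx = y, fy
            p = min(F * p, 0.5)
        else:        # failure: become more conservative
            p = max(p / F ** 0.25, 1.0 / n)
    return fx, p
```

Typically, the mutation rate stays high early in the run, when improvements are easy to find, and settles near the lower bound 1/n as the search concentrates around the optimum, mirroring the exploration-to-exploitation shift described above.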
While parameter control is indispensable in continuous optimization, it is far from being well-established in discrete optimization heuristics. The ambition of this tutorial is therefore to change this situation, by informing participants about different parameter control techniques, and by discussing both theoretical and experimental results that demonstrate the unexploited potential of non-static parameter choices.
Our tutorial addresses experimentally oriented and theory-oriented researchers alike, and requires only basic knowledge of optimization heuristics.