The Special Sessions and Competitions on Real-Parameter Single Objective Optimization are benchmarking competitions held every year since 2013, used to evaluate the performance of new optimization algorithms. One flaw of these competitions is that algorithms are compared only to other algorithms submitted in the same year, not to algorithms submitted in previous years of the competition, which makes comparison across all algorithms troublesome. Moreover, almost every year uses a different set of benchmark functions, so results from different years are not directly comparable. As a result, the winner of the most recent competition is not necessarily significantly better than the winners of previous years. In this article, we directly compare the winners of every competition held from 2013 to 2018 and present the results of this comparison. We use a benchmark set consisting of all test functions used by the competition throughout these years, and we compare the algorithms on benchmark functions grouped by dimension (10, 30, 50, 100) and by year (2013, 2014, 2015, 2017). Grouping by dimension shows which algorithms perform best at specific dimensionalities, while grouping by year shows the effect of parameter tuning on the final results. We present the results of these comparisons and find that later competition winners are not statistically better than algorithms from previous years in a general sense across all problems and dimensionalities.