In this paper, a novel approach to the statistical comparison of meta-heuristic stochastic optimization algorithms over multiple single-objective problems is introduced, based on a new ranking scheme for obtaining the data used in the multiple-problem analysis. The main contribution of this approach is that the ranking scheme is based on the whole distribution of results, instead of using only one statistic, such as the average or the median, to describe it. Averages are sensitive to outliers (i.e., the poor runs of the stochastic optimization algorithms), which is why medians are sometimes used instead. However, with the common approach, whether based on averages or medians, the results can be affected by the ranking scheme used by some standard statistical tests. This happens when the differences between the averages or medians lie within some ϵ-neighborhood: the algorithms then obtain different ranks, even though the differences between them are so small that they should be ranked equally. The experimental results obtained on the Black-Box Benchmarking 2015 test suite show that our approach gives more robust results than the common approach in cases where the results are affected by outliers or by a misleading ranking scheme.
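To make the ϵ-neighborhood problem concrete, the following sketch contrasts the common mean-based ranking with a distribution-based tie rule in the spirit of the proposed scheme. The data, the algorithm names, and the use of a two-sample Kolmogorov-Smirnov test as the distribution comparison are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical best-fitness values from 30 independent runs of three
# algorithms on one problem (assumed data, for illustration only):
# A and B differ by a negligible offset; C is clearly worse and also
# produces a few outlier (poor) runs.
runs_A = rng.normal(10.0, 0.5, 30)
runs_B = runs_A + 1e-6             # practically identical distribution
runs_C = rng.normal(12.0, 0.5, 30)
runs_C[:3] += 50.0                 # outlier runs inflate the average

algos = {"A": runs_A, "B": runs_B, "C": runs_C}

# Common approach: rank by the average. A and B receive different
# ranks even though the 1e-6 difference is meaningless.
mean_order = sorted(algos, key=lambda name: algos[name].mean())
print("mean-based ranking:", mean_order)

# Distribution-based idea: treat two algorithms as tied when a
# statistical test cannot distinguish their result distributions.
def tied(x, y, alpha=0.05):
    """Return True if the two samples' distributions are not
    significantly different at level alpha (two-sample KS test)."""
    return stats.ks_2samp(x, y).pvalue > alpha

print("A vs B tied:", tied(runs_A, runs_B))  # True -> share one rank
print("A vs C tied:", tied(runs_A, runs_C))  # False -> ranked apart
```

Under this rule, A and B would share a rank because the whole-distribution comparison sees no significant difference, while C is still ranked apart; a mean-based ranking would separate A and B on the strength of a 1e-6 difference.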