In this paper, we propose an extension of the recently introduced Deep Statistical Comparison (DSC) approach, called practical Deep Statistical Comparison (pDSC), which takes practical significance into account when statistically comparing meta-heuristic stochastic optimization algorithms on single-objective optimization problems. To capture practical significance, we propose two variants of the standard DSC ranking scheme. The first, sequential pDSC, accounts for practical significance by preprocessing the independent optimization runs in sequential order. The second, Monte Carlo pDSC, removes any dependence of the practical-significance assessment on the ordering of the optimization runs. An analysis on single-objective benchmark problems shows that, in some cases, both pDSC variants reach conclusions different from those of the Chess Rating System for Evolutionary Algorithms (CRS4EAs). Although preprocessing for practical significance is performed in a similar way in both approaches, their conclusions about practical significance can differ, because they rely on different statistical concepts to identify it.
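
The two preprocessing variants can be sketched as follows. This is a minimal illustration under our own assumptions, not the paper's exact procedure: the function names, the threshold `eps`, and the rule of merging practically equivalent values into ties are hypothetical simplifications of the idea that differences below a practical-significance threshold should not influence the ranking.

```python
import random

def preprocess_sequential(runs_a, runs_b, eps):
    """Sequential variant (sketch): pair runs of two algorithms in the
    given order and treat differences below eps as practically equal,
    merging both values into their mean so they rank as a tie."""
    a, b = list(runs_a), list(runs_b)
    for i in range(min(len(a), len(b))):
        if abs(a[i] - b[i]) < eps:            # not practically significant
            a[i] = b[i] = (a[i] + b[i]) / 2   # merge into a tie
    return a, b

def preprocess_monte_carlo(runs_a, runs_b, eps, n_shuffles=100, seed=0):
    """Monte Carlo variant (sketch): repeat the sequential preprocessing
    over many random pairings of the runs, so the outcome does not depend
    on one particular ordering of the optimization runs."""
    rng = random.Random(seed)
    results = []
    for _ in range(n_shuffles):
        a, b = runs_a[:], runs_b[:]
        rng.shuffle(a)
        rng.shuffle(b)
        results.append(preprocess_sequential(a, b, eps))
    return results
```

In this sketch, the sequential variant is sensitive to how runs happen to be paired, which is exactly the dependency the Monte Carlo variant averages away by resampling the pairing.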