In this study, we examine how well algorithm selection (AS) models generalize across diverse benchmark suites in single-objective numerical optimization, using Exploratory Landscape Analysis (ELA) and transformer-based (TransOpt) features. Finding that the models do not generalize reliably, we further investigate the reasons behind this failure. Our results highlight the difficulty of capturing algorithm performance through problem landscape features and indicate that further work is needed before AS can achieve reliable generalization.