Alzheimer’s disease remains a field of research with many open questions. The complexity of the disease prevents early diagnosis before visible symptoms affecting the individual’s cognitive capabilities occur. This research presents an in-depth analysis of a large data set encompassing medical, cognitive, and lifestyle measurements from more than 12,000 individuals. Several hypotheses were established, and their validity was examined in light of the obtained results. The importance of appropriate experimental design is strongly stressed in this research. Thus, a sequence of methods for handling missing data, redundancy, data imbalance, and correlation analysis was applied to preprocess the data set appropriately, and subsequently an XGBoost model was trained and evaluated with special attention to hyperparameter tuning. The model was explained using the Shapley values produced by the SHAP method. XGBoost achieved an F1-score of 0.84 and is thus highly competitive with results published in the literature. This achievement, however, was not the main contribution of this paper. The research’s goal was to perform global and local interpretability analysis of the intelligent model and derive valuable conclusions regarding the established hypotheses. Those methods led to a single scheme that presents the positive or negative influence of the values of each feature whose importance was confirmed by means of Shapley values. This scheme might be considered an additional source of knowledge for physicians and other experts concerned with the exact diagnosis of early-stage Alzheimer’s disease. The conclusions derived from the intelligent model’s data-driven interpretability confronted all the established hypotheses. This research clearly shows the importance of an explainable machine learning approach that opens the black box and unveils the relationships between the features and the diagnoses.