Investigating the Impact of Min-Max Data Normalization on the Regression Performance of K-Nearest Neighbor with Different Similarity Measurements

Muhammad Ali, Peshawa J. (2022) Investigating the Impact of Min-Max Data Normalization on the Regression Performance of K-Nearest Neighbor with Different Similarity Measurements. ARO-THE SCIENTIFIC JOURNAL OF KOYA UNIVERSITY, 10 (1). pp. 85-91. ISSN 2410-9355

Full text (PDF): ARO.10955- Vol10.No1.2022.ISSUE18-PP85-91.pdf - Published Version (1 MB)
Available under a Creative Commons Attribution Non-Commercial Share Alike license.
Official URL: https://aro.koyauniversity.org/

Abstract

K-nearest neighbor (KNN) is a lazy supervised learning algorithm that depends on computing the similarity between the target sample and its closest neighbor(s). Min-max normalization, in turn, has been reported as a useful method for eliminating the impact of inconsistent attribute ranges on the efficiency of some machine learning models. However, the impact of min-max normalization on the performance of KNN models remains unclear and needs further investigation. Therefore, this research examines the effect of min-max normalization on the regression performance of KNN models using eight different similarity measures: City block, Euclidean, Chebychev, Cosine, Correlation, Hamming, Jaccard, and Mahalanobis. Five benchmark datasets were used to test the accuracy of the KNN models on the original and the normalized data, with mean squared error (MSE) as the performance indicator. The results show that the impact of min-max normalization on KNN models using City block, Euclidean, Chebychev, Cosine, and Correlation depends on the nature of the dataset itself; testing models on both the original and the normalized data is therefore recommended. Min-max normalization makes no difference to the performance of KNN models using Hamming, Jaccard, and Mahalanobis, because the first two are ratio-based measures and the third incorporates the dataset covariance into the similarity calculation. Overall, Mahalanobis outperformed the other seven similarity measures. Compared with its peers, this study gains reliability from testing datasets drawn from different application fields.
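As a minimal sketch of the experimental idea described in the abstract (not the authors' original code), the following Python snippet compares KNN regression MSE on original versus min-max normalized data for several of the listed similarity measures. The dataset, train/test split, and neighbor count are illustrative assumptions; metric names follow scikit-learn's conventions.

```python
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_squared_error

# Benchmark regression dataset (a stand-in for the paper's five datasets).
X, y = load_diabetes(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Min-max normalization: rescale every attribute to [0, 1] using training-set ranges.
scaler = MinMaxScaler().fit(X_tr)
variants = {"original": (X_tr, X_te),
            "min-max": (scaler.transform(X_tr), scaler.transform(X_te))}

# A subset of the similarity measures studied, using scikit-learn metric names.
metrics = ["cityblock", "euclidean", "chebyshev", "cosine", "mahalanobis"]

for metric in metrics:
    for label, (tr, te) in variants.items():
        params = None
        if metric == "mahalanobis":
            # Mahalanobis needs the inverse covariance of the training attributes;
            # per-attribute rescaling cancels out of this distance, so both
            # variants are expected to score alike.
            params = {"VI": np.linalg.inv(np.cov(tr, rowvar=False))}
        knn = KNeighborsRegressor(n_neighbors=5, algorithm="brute",
                                  metric=metric, metric_params=params)
        knn.fit(tr, y_tr)
        mse = mean_squared_error(y_te, knn.predict(te))
        print(f"{metric:11s} | {label:8s} | MSE = {mse:8.2f}")
```

Running such a comparison across several datasets is the kind of evidence the abstract refers to: for the range-sensitive measures the ranking of "original" versus "min-max" varies with the dataset, while for Mahalanobis the two variants coincide.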

Item Type: Article
Uncontrolled Keywords: K-nearest neighbor, Min-max, Normalization, Similarity, Mahalanobis
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
Q Science > QA Mathematics > QA76 Computer software
Divisions: ARO-The Scientific Journal of Koya University > VOL 10, NO1 (2022)
Depositing User: Dr Salah Ismaeel Yahya
Date Deposited: 30 Aug 2022 10:06
Last Modified: 30 Aug 2022 10:06
URI: http://eprints.koyauniversity.org/id/eprint/317
