
Ahmed Najat Ahmed

Abstract

In machine learning, the classification of high-dimensional data is one of the significant challenges. Traditional approaches rely on dimensionality reduction and on increasing the diversity of classifiers, but these conventional methods have restrictions: dimensionality reduction causes information loss, which lowers accuracy, and sample selection is vulnerable to redundant features and noise. To overcome these restrictions, the proposed Hybrid Dimensionality Reduction Forest (HDRF) combines a Random Forest (RF) ensemble classifier with the kappa measure. Initially, the kappa measure is used for pruning, and the trees with the highest agreement are retained from the forest. A tree-based selection method is used to partition the features, and Principal Component Analysis (PCA) is applied for feature extraction, noise reduction, and dimensionality reduction. The proposed method removes weak classifiers and eliminates redundancy; it also maps the unselected and the fundamental features into a new feature space. In the evaluation on 25 high-dimensional datasets, the proposed method outperforms Random Forest ensemble classifier methods, achieving improved results on 21 of the 25 datasets.
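The abstract outlines a pipeline of kappa-based pruning, tree-based feature selection, PCA, and a Random Forest ensemble. The snippet below is a minimal sketch, assuming scikit-learn, of how such a pipeline could be wired together; the synthetic dataset, component count, and median-kappa pruning threshold are illustrative assumptions, not the paper's actual HDRF configuration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import train_test_split

# Hypothetical toy data standing in for one of the 25 high-dimensional datasets.
X, y = make_classification(n_samples=500, n_features=200, n_informative=20,
                           random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

# PCA for feature extraction, noise reduction, and dimensionality reduction.
pca = PCA(n_components=20).fit(X_tr)
X_tr_p, X_val_p = pca.transform(X_tr), pca.transform(X_val)

# Grow a Random Forest ensemble on the reduced features.
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr_p, y_tr)

# Kappa-based pruning: keep only trees whose validation kappa is at or above the
# median, i.e. discard the weak classifiers.
kappas = np.array([cohen_kappa_score(y_val, t.predict(X_val_p))
                   for t in rf.estimators_])
kept = [t for t, k in zip(rf.estimators_, kappas) if k >= np.median(kappas)]

# Majority vote over the pruned ensemble.
votes = np.stack([t.predict(X_val_p).astype(int) for t in kept])
y_pred = np.apply_along_axis(lambda c: np.bincount(c).argmax(), 0, votes)
print("pruned-ensemble kappa:", cohen_kappa_score(y_val, y_pred))
```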



Section
Articles

How to Cite

Ahmed Najat Ahmed. (2021). Reduction in High-Dimensional Data by using HDRF with Random Forest Classifier. QALAAI ZANIST JOURNAL, 6(4), 876–889. https://doi.org/10.25212/lfu.qzj.6.4.30
