ENSEMBLE CLASSIFIERS FOR HYPERSPECTRAL CLASSIFICATION
Authors: J. C-W Chan and F. Canters
Publisher: European Association of Remote Sensing Laboratories
Publication Date: Apr. 2007
Abstract: Machine learning algorithms are methods developed to handle large volumes of data efficiently. Adaboost has been among the most popular and promising algorithms of the last decade and has demonstrated its potential for classification of remote sensing data. Previous studies have shown that Adaboost, though less stable than bagging (another well-known ensemble classification algorithm), consistently produces higher accuracies in classification tasks across a wide variety of data domains. The use of Adaboost for hyperspectral classification, however, has not been fully explored. Like Adaboost, Random Forest is a recently proposed bootstrap method that generates numerous classifiers, up to hundreds, for classification. Using the same resampling strategy as bagging, Random Forest introduces a new feature, called out-of-bag samples, for feature ranking and evaluation. Its only tuning parameter is the number of features to split on at each node, to which accuracy is described as insensitive. Comparatively, Adaboost has no parameters except for the amount of pruning of its base trees, whereas Random Forest applies no pruning at all. In this paper, we compare the results obtained with both classifiers on hyperspectral data. Results from two applications, one on ecotope mapping and one on urban mapping, are presented. Compared with a single decision tree classifier, Adaboost increases classification accuracy by 9%, and Random Forest by 13%. Both classifiers achieve comparable results in terms of overall accuracy. Random Forest, however, is more efficient because it considers only a random feature subset at each split and performs no pruning. Our results show that both Adaboost and Random Forest are exceptionally fast in training and achieve higher accuracies than established classifiers such as Multi-Layer Perceptrons. Their limited demands on user input for parameter tuning make them ideal algorithms for operationally oriented tasks.
The study demonstrates that Adaboost and Random Forest perform well with hyperspectral data, in terms of both accuracy and ease of use.
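The comparison described in the abstract can be sketched in a few lines. The following is an illustrative example only, not the authors' code: it uses scikit-learn (not available at the time of the paper) and synthetic data as a stand-in for a hyperspectral scene, but it shows the two design differences the abstract highlights, namely Random Forest's random feature subset at each node and its out-of-bag accuracy estimate.

```python
# Illustrative sketch (assumption: scikit-learn as a modern substitute for
# the classifiers discussed in the paper; the data here is synthetic).
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for hyperspectral pixels: many bands (features),
# several land-cover classes.
X, y = make_classification(n_samples=1000, n_features=100, n_informative=20,
                           n_classes=5, n_clusters_per_class=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Adaboost: boosting, i.e. sequential reweighting of training samples.
ada = AdaBoostClassifier(n_estimators=100, random_state=0)
ada.fit(X_train, y_train)

# Random Forest: bagging plus a random subset of features at each split
# (max_features), with unpruned trees; oob_score=True turns on the
# out-of-bag evaluation mentioned in the abstract.
rf = RandomForestClassifier(n_estimators=100, max_features="sqrt",
                            oob_score=True, random_state=0)
rf.fit(X_train, y_train)

print("Adaboost accuracy:      ", accuracy_score(y_test, ada.predict(X_test)))
print("Random Forest accuracy: ", accuracy_score(y_test, rf.predict(X_test)))
print("RF out-of-bag estimate: ", rf.oob_score_)
```

The out-of-bag estimate comes for free from the bootstrap resampling: each tree is evaluated on the samples it never saw during training, so no separate validation set is needed for model assessment or feature ranking.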