Entropy-based regularization of AdaBoost

  • Michał Bereta


In this study, we introduce an entropy-based method to regularize the AdaBoost algorithm. AdaBoost is a well-known algorithm for creating aggregated classifiers. In many real-world classification problems, in addition to the classification accuracy of the final classifier, great attention is paid to limiting the number of so-called weak learners that are aggregated by the final (strong) classifier. The proposed method improves the AdaBoost algorithm with respect to both criteria. While many approaches to regularizing boosting algorithms are complicated, the proposed method is straightforward and easy to implement. We compare the results of the proposed method (EntropyAdaBoost) with the original AdaBoost and with its regularized version, ε-AdaBoost, on several classification problems. It is shown that EntropyAdaBoost and ε-AdaBoost are strongly complementary when the improvement of AdaBoost is considered.
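The abstract does not reproduce the EntropyAdaBoost algorithm itself; as background for the aggregation scheme it builds on, the following is a minimal sketch of standard (discrete) AdaBoost with decision stumps. All function names are illustrative, and the entropy-based regularization of the paper is not implemented here.

```python
# Background sketch only: plain discrete AdaBoost with decision stumps.
# The paper's EntropyAdaBoost modifies this scheme; those details are
# not reproduced here. Names are illustrative, not from the paper.
import numpy as np

def stump_predict(X, feature, threshold, polarity):
    """Predict +1/-1 with a single-feature threshold rule."""
    pred = np.ones(len(X))
    pred[polarity * X[:, feature] < polarity * threshold] = -1
    return pred

def fit_adaboost(X, y, n_rounds=10):
    """Return a list of (alpha, stump) pairs; labels y must be in {-1, +1}."""
    n = len(X)
    w = np.full(n, 1.0 / n)              # uniform initial sample weights
    ensemble = []
    for _ in range(n_rounds):
        best, best_err = None, np.inf
        # exhaustive search over stumps (feature, threshold, polarity)
        for feature in range(X.shape[1]):
            for threshold in np.unique(X[:, feature]):
                for polarity in (1, -1):
                    pred = stump_predict(X, feature, threshold, polarity)
                    err = np.sum(w[pred != y])
                    if err < best_err:
                        best_err, best = err, (feature, threshold, polarity)
        eps = max(best_err, 1e-10)       # avoid log(0) on perfect stumps
        alpha = 0.5 * np.log((1 - eps) / eps)  # weak learner's vote weight
        pred = stump_predict(X, *best)
        w *= np.exp(-alpha * y * pred)   # up-weight misclassified samples
        w /= w.sum()
        ensemble.append((alpha, best))
    return ensemble

def predict(ensemble, X):
    """Sign of the alpha-weighted vote of all stumps."""
    score = sum(a * stump_predict(X, *s) for a, s in ensemble)
    return np.sign(score)
```

Each round the sample weights concentrate on the hardest examples, and the number of rounds directly sets the number of aggregated weak learners, which is the quantity the regularized variants aim to control.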


Keywords: AdaBoost, regularization, entropy, EntropyAdaBoost
Dec 22, 2017
How to Cite
BERETA, Michał. Entropy-based regularization of AdaBoost. Computer Assisted Methods in Engineering and Science, [S.l.], v. 24, n. 2, p. 89–100, Dec. 2017. ISSN 2299-3649. Available at: <http://cames.ippt.gov.pl/index.php/cames/article/view/206>. Date accessed: 22 July 2018.