After spending several hours on this problem, you are anxious to build a high-accuracy model. You build 5 GBM models, expecting a boosting algorithm to work its magic. Unfortunately, none of the models performs better than the benchmark score. Finally, you decide to combine the models. Ensembled models are known to return high accuracy, but you are out of luck. Where did you go wrong?

by User Frio

1 Answer


Final answer:

Boosting algorithms like GBM can improve model performance, but an ensemble of weak or highly correlated models may not beat the benchmark.

Step-by-step explanation:

Boosting algorithms such as GBM are powerful tools for improving model performance. However, there are several reasons why your ensembled models may not have beaten the benchmark score:

  1. Weak learners: If the individual GBM models were not strong enough, combining them may not lead to a significant improvement.
  2. Correlated predictions: If the five GBMs were trained on the same features and data with similar hyperparameters, they likely produced very similar predictions, so combining them adds little. An ensemble helps most when the base models make different errors.
  3. Data quality: If the training data is noisy or contains outliers, every model's performance suffers, and the ensemble inherits the problem.
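The effect of correlated models (point 2) can be illustrated with a short simulation. Averaging the errors of independent models shrinks their variance by roughly 1/n, while averaging highly correlated errors barely helps. This is a minimal NumPy sketch, not a GBM run; the correlation value 0.9 is an assumption chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_models, n_samples = 5, 100_000

# Independent model errors: averaging 5 of them cuts variance to ~1/5
indep = rng.normal(size=(n_models, n_samples))
var_indep = indep.mean(axis=0).var()

# Highly correlated errors (shared component, rho = 0.9):
# variance of the average is ~ rho + (1 - rho) / n_models
rho = 0.9
shared = rng.normal(size=n_samples)
corr = np.sqrt(rho) * shared + np.sqrt(1 - rho) * rng.normal(size=(n_models, n_samples))
var_corr = corr.mean(axis=0).var()

print(round(var_indep, 3))  # close to 1/5 = 0.2
print(round(var_corr, 3))   # close to 0.9 + 0.1/5 = 0.92
```

Five near-identical GBMs behave like the correlated case: the ensemble's error is almost the same as a single model's.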

To improve the ensemble's accuracy, strengthen the individual models, diversify the base learners (different algorithms, feature subsets, or hyperparameters), or address the underlying data issues.
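One way to diversify base models is to combine different algorithm families rather than five copies of the same GBM. A minimal sketch using scikit-learn's `VotingClassifier` on a synthetic dataset (the dataset, models, and hyperparameters here are illustrative assumptions, not the original setup):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic binary classification data for illustration
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

# Different algorithm families tend to make different errors,
# so their combination gains more than five similar GBMs would.
ensemble = VotingClassifier(
    estimators=[
        ("gbm", GradientBoostingClassifier(random_state=42)),
        ("lr", LogisticRegression(max_iter=1000)),
        ("knn", KNeighborsClassifier()),
    ],
    voting="soft",  # average predicted class probabilities
)
ensemble.fit(X_tr, y_tr)
acc = ensemble.score(X_te, y_te)
print(round(acc, 3))
```

Soft voting averages predicted probabilities, which usually works better than hard majority voting when the base models are well calibrated.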

by User MsrButterfly