Which of the following hyperparameter(s) is/are common in AdaBoost, Gradient Boost, and XGBoost?

A. random_state
B. learning_rate
C. subsample
D. n_estimators

1 Answer


Final answer:

The hyperparameters common to AdaBoost, Gradient Boosting, and XGBoost are learning_rate and n_estimators; subsample applies only to Gradient Boosting and XGBoost.

Step-by-step explanation:

Checking each option against the three algorithms:

  • learning_rate - This hyperparameter scales the contribution of each weak learner to the ensemble and is exposed by all three algorithms. A smaller learning rate makes the model more resistant to overfitting, but it usually requires more estimators.
  • n_estimators - This hyperparameter sets the number of weak learners (boosting rounds) in the ensemble and is exposed by all three algorithms. More estimators can improve performance but also increase training time.
  • subsample - This hyperparameter sets the fraction of training samples drawn to fit each weak learner, which adds randomness and helps prevent overfitting. It is exposed only by Gradient Boosting and XGBoost; AdaBoost has no subsample hyperparameter.
  • random_state - This appears in the scikit-learn and XGBoost implementations of all three, but it only seeds the random number generator for reproducibility and is not a tuning hyperparameter of the boosting algorithms.

Therefore, options B (learning_rate) and D (n_estimators) are the hyperparameters common to all three boosting algorithms.
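
As a quick check, here is a minimal sketch (assuming scikit-learn plus the separately installed xgboost package, a toy dataset from make_classification, and illustrative values for the hyperparameters) showing how the shared settings are passed to each implementation; only the two gradient-boosting estimators accept subsample:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import AdaBoostClassifier, GradientBoostingClassifier
    from xgboost import XGBClassifier  # requires the xgboost package

    # Toy data just to make the sketch runnable
    X, y = make_classification(n_samples=500, n_features=10, random_state=0)

    # learning_rate and n_estimators are accepted by all three estimators
    common = dict(n_estimators=200, learning_rate=0.1)

    ada = AdaBoostClassifier(**common, random_state=0)
    # subsample is exposed by GradientBoostingClassifier and XGBClassifier,
    # but not by AdaBoostClassifier
    gbm = GradientBoostingClassifier(**common, subsample=0.8, random_state=0)
    xgb = XGBClassifier(**common, subsample=0.8, random_state=0)

    for name, model in [("AdaBoost", ada), ("Gradient Boosting", gbm), ("XGBoost", xgb)]:
        model.fit(X, y)
        print(f"{name}: training accuracy = {model.score(X, y):.3f}")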

answered by Ozgur Oz (7.2k points)