Ensembles

  1. (good) review on voting, bagging, boosting, stacking, and cascading methodologies
  2. How to combine several sklearn algorithms into a voting ensemble
  3. Stacking api, MLXTEND
  4. Machine Learning Mastery on
    1. stacking neural nets - really good
      1. Stacked Generalization Ensemble
      2. Multi-Class Classification Problem
      3. Multilayer Perceptron Model
      4. Train and Save Sub-Models
      5. Separate Stacking Model
      6. Integrated Stacking Model
    2. How to Combine Predictions for Ensemble Learning
      1. Plurality Voting.
      2. Majority Voting.
      3. Unanimous Voting.
      4. Weighted Voting.
    3. Essence of Stacking Ensembles for Machine Learning
      1. Voting Ensembles
      2. Weighted Average
      3. Blending Ensemble
      4. Super Learner Ensemble
    4. Dynamic Ensemble Selection (DES) for Classification in Python - Dynamic Ensemble Selection algorithms operate much like DCS (Dynamic Classifier Selection) algorithms, except that predictions are made using votes from multiple classifier models instead of a single best model. In effect, each region of the input feature space is owned by the subset of models that performs best in that region (a minimal KNORA-U sketch follows this list).
      1. k-Nearest Neighbor Oracle (KNORA) With Scikit-Learn
        1. KNORA-Eliminate (KNORA-E)
        2. KNORA-Union (KNORA-U)
      2. Hyperparameter Tuning for KNORA
        1. Explore k in k-Nearest Neighbor
        2. Explore Algorithms for Classifier Pool
    5. A Gentle Introduction to Mixture of Experts Ensembles
      1. Mixture of Experts
        1. Subtasks
        2. Expert Models
        3. Gating Model
        4. Pooling Method
      2. Relationship With Other Techniques
        1. Mixture of Experts and Decision Trees
        2. Mixture of Experts and Stacking
    6. Strong Learners vs. Weak Learners in Ensemble Learning - Weak learners are models that perform slightly better than random guessing. Strong learners are models that have arbitrarily good accuracy.
      Weak and strong learners are tools from computational learning theory and provide the basis for the development of the boosting class of ensemble methods.
  5. Vidhya on trees, bagging, boosting, GBM, XGB
  6. Parallel gradient-boosted trees
  7. A comprehensive guide to ensembles - read! (Samuel Jefroykin)
    1. Basic Ensemble Techniques
    2. 2.1 Max Voting
    3. 2.2 Averaging
    4. 2.3 Weighted Average
    5. Advanced Ensemble Techniques
    6. 3.1 Stacking
    7. 3.2 Blending
    8. 3.3 Bagging
    9. 3.4 Boosting
    10. Algorithms based on Bagging and Boosting
    11. 4.1 Bagging meta-estimator
    12. 4.2 Random Forest
    13. 4.3 AdaBoost
    14. 4.4 GBM
    15. 4.5 XGB
    16. 4.6 Light GBM
    17. 4.7 CatBoost
  8. Kaggler guide to stacking
  9. Blending vs stacking
  10. Kaggle ensemble guide
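
Not part of the links above: a minimal sketch of KNORA-U dynamic ensemble selection (the DES item in the Machine Learning Mastery list), assuming the deslib package and a bagged pool of decision trees; dataset sizes and parameter values are illustrative placeholders, not taken from the tutorial.

```python
# Hypothetical KNORA-U (KNORA-Union) example, assuming deslib is installed (pip install deslib).
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from deslib.des.knora_u import KNORAU  # the KNORA-Eliminate variant is deslib.des.knora_e.KNORAE

X, y = make_classification(n_samples=2000, n_features=20, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=1)
# hold out part of the training data as the dynamic-selection (DSEL) set
X_fit, X_dsel, y_fit, y_dsel = train_test_split(X_train, y_train, test_size=0.5, random_state=1)

pool = BaggingClassifier(n_estimators=10, random_state=1).fit(X_fit, y_fit)  # pool of classifiers
des = KNORAU(pool_classifiers=pool, k=7)  # k nearest neighbours define each local region
des.fit(X_dsel, y_dsel)                   # per test point, only locally competent models get a vote
print("KNORA-U accuracy: %.3f" % accuracy_score(y_test, des.predict(X_test)))
```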

- Bagging: train multiple classifiers, each on a random sample of the training data.
- Random forest: bagging of trees, plus a random feature subset for each tree.
- Boosting: build a sequence of weak models (e.g., stumps); each new model tries to fix the errors of the previous ones, and the final prediction on new data combines all of them, each model weighted by its skill.
- Voting: train any set of algorithms (e.g., within Weka) and combine their predictions by majority vote, mean, or some other rule.
- Stacking: like voting, but a meta-model is trained to combine the base models' predictions.
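
A quick illustrative contrast of the last two (voting vs. stacking) as a scikit-learn sketch; the base models and dataset are arbitrary stand-ins, not taken from the sources above.

```python
# Hedged sketch: voting vs. stacking over the same heterogeneous base models.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, random_state=0)
base = [("lr", LogisticRegression(max_iter=1000)),
        ("svm", SVC(probability=True)),
        ("tree", DecisionTreeClassifier(max_depth=5))]

voter = VotingClassifier(estimators=base, voting="hard")             # majority vote
stacker = StackingClassifier(estimators=base,
                             final_estimator=LogisticRegression())   # meta-model combines base predictions
for name, model in [("voting", voter), ("stacking", stacker)]:
    print(name, cross_val_score(model, X, y, cv=5).mean())
```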

BAGGING - bootstrap aggregating

Bagging - the clearest recipe so far: create m bags, each holding n' < n samples (e.g., ~60% of n) drawn with replacement, meaning the same sample can be selected twice or more. Train one model per bag, query each of the m models with the test point x, and average their outputs - that average is the final classification.

Overfitting - not really an issue with bagging: averaging the models smooths the individual decision "curves", even if every base model on its own is overfitted.
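
A minimal sketch of the recipe above, assuming scikit-learn's BaggingClassifier; the 60% bag size mirrors the n' ≈ 0.6·n description, everything else is a placeholder.

```python
# Hedged bagging sketch: m bags, each ~60% of n drawn with replacement, one tree per bag.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, random_state=0)
bag = BaggingClassifier(DecisionTreeClassifier(),  # base model, trained once per bag
                        n_estimators=25,           # m bags / m models
                        max_samples=0.6,           # n' = 60% of n per bag
                        bootstrap=True,            # sample with replacement
                        random_state=0)
print(cross_val_score(bag, X, y, cv=5).mean())     # predictions are combined across the m models
```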

BOOSTING

Mastery on using all the boosting algorithms: Gradient Boosting with Scikit-Learn, XGBoost, LightGBM, and CatBoost

AdaBoost: similar to bagging, but the sampling is biased toward samples that previous models handled poorly.

  1. Create bag_1 with n' < n samples drawn with replacement, train model_1, and test it on ALL of the training data.
  2. Create bag_2 with n' samples drawn with replacement, but bias the selection toward samples that model_1 misclassified. Train model_2 and average the results of model_1 and model_2, i.e., taking into account which samples each classified correctly.
  3. Create bag_3 with n' samples drawn with replacement, biased toward samples that models 1 and 2 misclassified. Train model_3 and average the results of models 1, 2 and 3. Iterate onward.
  4. Create bag_m with n' samples drawn with replacement, biased toward the samples that were misclassified in the previous steps.
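
The same reweighting idea in code: a hedged scikit-learn AdaBoost sketch with decision stumps (scikit-learn handles the re-selection internally via sample weights rather than explicit bags); all values here are illustrative.

```python
# Hedged AdaBoost sketch: each new stump up-weights the samples the previous ones got wrong.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, random_state=0)
ada = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1),  # weak learner: a single stump
                         n_estimators=100,                     # number of reweighting rounds
                         learning_rate=0.5)                    # shrinks each model's skill weight
print(cross_val_score(ada, X, y, cv=5).mean())                 # final prediction is the weighted vote
```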

XGBOOST

R installation in Weka, then XGBoost in Weka through R.

Parameters for the Weka MLRClassifier with classif.xgboost:

  • https://cran.r-project.org/web/packages/xgboost/xgboost.pdf
  • Example configurations (the second is for multi-class classification):
  • weka.classifiers.mlr.MLRClassifier -learner "nrounds = 10, max_depth = 2, eta = 0.5, nthread = 2"
  • classif.xgboost -params "nrounds = 1000, max_depth = 4, eta = 0.05, nthread = 5, objective = \"multi:softprob\""

Copy: nrounds = 10, max_depth = 2, eta = 0.5, nthread = 2
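
Not from the source: a rough equivalent of the parameter strings above in the xgboost Python native API (assumed installed); the iris data is just a stand-in multi-class dataset.

```python
# Hedged sketch: the Weka/mlr parameter string expressed with xgboost's native Python API.
import xgboost as xgb
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)
dtrain = xgb.DMatrix(X, label=y)
params = {"max_depth": 2, "eta": 0.5, "nthread": 2,        # same names as the mlr string
          "objective": "multi:softprob", "num_class": 3}   # multi-class needs num_class here
bst = xgb.train(params, dtrain, num_boost_round=10)        # nrounds = 10
```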

Special case of random forest using XGBoost:

```r
library(xgboost)
# demo data shipped with the xgboost package, so train$data / train$label below resolve
data(agaricus.train, package = "xgboost")
train <- agaricus.train

# Random Forest™ - 1000 trees, grown in parallel within a single boosting round
bst <- xgboost(data = train$data, label = train$label, max_depth = 4, num_parallel_tree = 1000, subsample = 0.5, colsample_bytree = 0.5, nrounds = 1, objective = "binary:logistic")

# Boosting - 3 rounds
bst <- xgboost(data = train$data, label = train$label, max_depth = 4, nrounds = 3, objective = "binary:logistic")
```

RF1000: max_depth = 4, num_parallel_tree = 1000, subsample = 0.5, colsample_bytree = 0.5, nrounds = 1, nthread = 2

XG: nrounds = 10, max_depth = 4, eta = 0.5, nthread = 2

Gradient Boosting Classifier

  1. Loss functions and GBC vs XGB
  2. Why is XGB faster than SK GBC
  3. Good XGB vs GBC tutorial
  4. XGB vs GBC

CatBoost

  1. (great) what is so special?
  2. the fastest algo
  3. a new game in ML
  4. use it - here is why