ML Zoomcamp 2023 – Decision Trees and Ensemble Learning – Part 14

Selecting the final model

This is the final part (Part 14) of the module ‘Decision Trees and Ensemble Learning.’ This time, we revisit the best model of each type and evaluate their performance on the validation data. Based on these evaluations, we will select the overall best model and train it on the full training data.
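A minimal sketch of that comparison step, assuming the tuned models and the validation split from the earlier parts (all variable names here are hypothetical):

```python
from sklearn.metrics import roc_auc_score

# Validation AUC of the best model of each type
# (dt, rf are tuned scikit-learn models; xgb_model and dval
# come from the XGBoost parts; all names are hypothetical).
scores = {
    'decision tree': roc_auc_score(y_val, dt.predict_proba(X_val)[:, 1]),
    'random forest': roc_auc_score(y_val, rf.predict_proba(X_val)[:, 1]),
    'xgboost':       roc_auc_score(y_val, xgb_model.predict(dval)),
}
print(scores)  # the winner is then retrained on train + validation
               # and evaluated once on the held-out test set
```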

ML Zoomcamp 2023 – Decision Trees and Ensemble Learning – Part 13

XGBoost parameter tuning – Part 2/2

This is the second part about XGBoost parameter tuning. In the first part, we tuned the first parameter, ‘eta’. Now we will explore tuning for ‘max_depth’ and ‘min_child_weight’. Finally, we’ll train the final model.

Tuning max_depth

The default value is max_depth=6. Now that we’ve set ‘eta’ to 0.1, the value we determined in the first part, we can tune ‘max_depth’ in the same way.
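A sketch of what such a tuning step might look like, reusing the data setup from the sketches further down this page; the candidate values are illustrative:

```python
import xgboost as xgb
from sklearn.metrics import roc_auc_score

# With eta fixed at 0.1, scan max_depth; min_child_weight is then
# tuned the same way (dtrain, dval, y_val as in the Part 10 sketch).
for max_depth in [3, 4, 6, 10]:
    params = {
        'eta': 0.1,
        'max_depth': max_depth,
        'min_child_weight': 1,
        'objective': 'binary:logistic',
        'seed': 1,
        'verbosity': 0,
    }
    model = xgb.train(params, dtrain, num_boost_round=100)
    print(max_depth, roc_auc_score(y_val, model.predict(dval)))
```

The final model is then trained once with the best combination of ‘eta’, ‘max_depth’, and ‘min_child_weight’.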

ML Zoomcamp 2023 – Decision Trees and Ensemble Learning – Part 12

XGBoost parameter tuning – Part 1/2

This part is about XGBoost parameter tuning. It’s the first part of a two-part series, where we begin by tuning the first parameter, ‘eta’. The second part will explore tuning ‘max_depth’ and ‘min_child_weight’ and then train the final model. Let’s start tuning the ‘eta’ parameter.
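A sketch of the kind of loop used for this comparison, assuming the dtrain/dval/y_val objects from the Part 10 sketch below; the candidate values are illustrative:

```python
import xgboost as xgb
from sklearn.metrics import roc_auc_score

# Compare validation AUC across candidate learning rates;
# a smaller eta usually needs more boosting rounds.
scores = {}
for eta in [0.01, 0.05, 0.1, 0.3, 1.0]:
    params = {
        'eta': eta,
        'max_depth': 6,
        'min_child_weight': 1,
        'objective': 'binary:logistic',
        'seed': 1,
        'verbosity': 0,
    }
    model = xgb.train(params, dtrain, num_boost_round=100)
    scores[eta] = roc_auc_score(y_val, model.predict(dval))

print(scores)  # pick the eta with the best validation AUC
```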

ML Zoomcamp 2023 – Decision Trees and Ensemble Learning – Part 11

Gradient boosting and XGBoost – Part 2/2

This is part 2 of ‘Gradient boosting and XGBoost.’ In the first part, we compared random forests with gradient boosting, installed XGBoost, and trained our first XGBoost model. In this part, we delve into performance monitoring.

Performance Monitoring

In XGBoost, it’s possible to monitor the model’s performance on the training and validation sets while it trains.
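A minimal sketch of this monitoring, assuming the dtrain/dval DMatrix objects from the Part 10 sketch below; the parameter values are illustrative:

```python
import xgboost as xgb

# A watchlist tells xgb.train which datasets to score after each round.
watchlist = [(dtrain, 'train'), (dval, 'val')]

xgb_params = {
    'eta': 0.3,
    'max_depth': 6,
    'min_child_weight': 1,
    'objective': 'binary:logistic',
    'eval_metric': 'auc',   # metric reported for each watchlist entry
    'seed': 1,
    'verbosity': 1,
}

# verbose_eval=5 prints train/val AUC every 5 boosting rounds,
# which makes it easy to spot where the model starts to overfit.
model = xgb.train(xgb_params, dtrain, num_boost_round=100,
                  evals=watchlist, verbose_eval=5)
```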

ML Zoomcamp 2023 – Decision Trees and Ensemble Learning – Part 10

Gradient boosting and XGBoost – Part 1/2

This time, we delve into a different approach to combining decision trees, where models are trained sequentially and each new model corrects the errors of the previous one. This method of combining models is known as boosting. We will specifically explore gradient boosting, using the XGBoost library.
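As a taste of what the post covers, here is a minimal, self-contained sketch of training a first XGBoost model; the synthetic dataset and variable names are illustrative, not the course's:

```python
# pip install xgboost
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Illustrative data standing in for the course dataset.
X, y = make_classification(n_samples=1000, n_features=10, random_state=1)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.25,
                                                  random_state=1)

# XGBoost uses its own optimized data structure, DMatrix.
dtrain = xgb.DMatrix(X_train, label=y_train)
dval = xgb.DMatrix(X_val, label=y_val)

xgb_params = {
    'eta': 0.3,                        # learning rate (default)
    'max_depth': 6,
    'min_child_weight': 1,
    'objective': 'binary:logistic',
    'seed': 1,
    'verbosity': 1,
}

# Each boosting round adds one tree that corrects the errors
# of the trees trained so far.
model = xgb.train(xgb_params, dtrain, num_boost_round=10)
y_pred = model.predict(dval)           # predicted probabilities
```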