ML Zoomcamp 2023 – Decision Trees and Ensemble Learning – Part 14
Selecting the final model This is the final part (Part 14) of the module ‘Decision Trees and Ensemble Learning.’ This time, we revisit the best model of each type and evaluate their performance on the validation data. Based on these evaluations, we will select the overall best model and train it on the full…
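The selection step can be sketched as a simple comparison of validation scores. The AUC numbers below are invented placeholders, not the course’s actual results:

```python
# Hypothetical validation AUCs for the best model of each type
# (placeholder numbers, not results from the course).
val_auc = {
    "decision_tree": 0.785,
    "random_forest": 0.825,
    "xgboost": 0.832,
}

# Pick the model type with the highest validation AUC; that model is
# then retrained on the combined training + validation data.
best_model = max(val_auc, key=val_auc.get)
```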
Category Archives: ML-Zoomcamp
ML Zoomcamp 2023 – Decision Trees and Ensemble Learning – Part 13
XGBoost parameter tuning – Part 2/2 This is the second part on XGBoost parameter tuning. In the first part, we tuned the first parameter, ‘eta’. Now we will explore tuning ‘max_depth’ and ‘min_child_weight’. Finally, we’ll train the final model. Tuning max_depth (max_depth=6): now that we’ve set ‘eta’ to 0.1, which we determined…
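The tuning loop for these two parameters can be sketched as a small grid search. Here `validate` is a stand-in for training an XGBoost model with the given parameters and scoring it on the validation set; the scores and candidate values are invented for illustration:

```python
from itertools import product

# Stand-in for: train an XGBoost model with these parameters and return
# its validation AUC. The scores below are invented placeholders.
def validate(max_depth, min_child_weight):
    scores = {(3, 1): 0.830, (3, 30): 0.827, (6, 1): 0.825,
              (6, 30): 0.828, (10, 1): 0.810, (10, 30): 0.815}
    return scores[(max_depth, min_child_weight)]

# Try every combination and keep the best-scoring pair.
grid = product([3, 6, 10], [1, 30])
best_params = max(grid, key=lambda p: validate(*p))
```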
ML Zoomcamp 2023 – Decision Trees and Ensemble Learning – Part 12
XGBoost parameter tuning – Part 1/2 This part is about XGBoost parameter tuning. It’s the first part of a two-part series, where we begin by tuning the initial parameter, ‘eta’. The subsequent article will explore parameter tuning for ‘max_depth’ and ‘min_child_weight’. In the final phase, we’ll train the final model. Let’s start tuning the…
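What ‘eta’ does can be shown with a toy one-number regression: each boosting round adds only eta times the remaining residual, so a smaller eta learns more cautiously and needs more rounds. This is a conceptual sketch, not XGBoost’s actual implementation:

```python
# Toy illustration of eta (the learning rate): each boosting round adds
# only eta * residual, so smaller eta means slower, more cautious fitting.
def rounds_to_fit(eta, target=10.0, tol=0.01):
    pred, rounds = 0.0, 0
    while abs(target - pred) > tol:
        pred += eta * (target - pred)  # each "tree" corrects the residual
        rounds += 1
    return rounds
```

A smaller eta like 0.1 takes several times as many rounds as 0.3 to reach the same fit, which is why eta is usually tuned together with the number of boosting rounds.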
ML Zoomcamp 2023 – Decision Trees and Ensemble Learning – Part 11
Gradient boosting and XGBoost – Part 2/2 This is part 2 of ‘Gradient boosting and XGBoost.’ In the first part, we compared random forests and gradient boosting, followed by the installation of XGBoost and training our first XGBoost model. In this chapter, we delve into performance monitoring. Performance monitoring: in XGBoost, it’s feasible to monitor…
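What monitoring gives you is one metric value per boosting round, so you can see where validation performance stops improving. The per-round errors below are invented to illustrate the idea:

```python
# Hypothetical per-round validation errors, as a monitored training run
# would report them (invented numbers).
val_error = [0.30, 0.26, 0.24, 0.235, 0.236, 0.239]

# The best number of rounds is where validation error bottoms out;
# training past that point starts to overfit.
best_round = min(range(len(val_error)), key=val_error.__getitem__)
```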
ML Zoomcamp 2023 – Decision Trees and Ensemble Learning – Part 10
Gradient boosting and XGBoost – Part 1/2 This time, we delve into a different approach for combining decision trees, where models are trained sequentially, with each new model correcting the errors of the previous one. This method of model combination is known as boosting. We will specifically explore gradient boosting and utilize the XGBoost library…
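The sequential error-correcting idea can be sketched with a toy regression: each new “model” is fit to the residuals of the ensemble so far. Here the model is a stump with a fixed split (real boosting learns the splits); the data and split point are invented:

```python
# Toy data: one feature and a regression target.
x = [1, 2, 3, 4]
y = [3.0, 5.0, 10.0, 12.0]

def stump(residuals):
    # A tiny "model": split at x <= 2 (fixed here for illustration) and
    # predict the mean residual on each side.
    left = [r for xi, r in zip(x, residuals) if xi <= 2]
    right = [r for xi, r in zip(x, residuals) if xi > 2]
    return [sum(left) / len(left) if xi <= 2 else sum(right) / len(right)
            for xi in x]

# Boosting: each round fits a new model to the current errors and adds it.
pred = [0.0] * len(y)
for _ in range(3):
    residuals = [t - p for t, p in zip(y, pred)]
    pred = [p + c for p, c in zip(pred, stump(residuals))]
```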
ML Zoomcamp 2023 – Decision Trees and Ensemble Learning – Part 9
Ensemble and random forest This article discusses the concept of Random Forest as a technique for combining multiple decision trees. Before diving into Random Forest, we’ll explore the concept of ensemble modeling, where multiple models act as a ‘board of experts.’ The final part of the article will cover the process of tuning a Random…
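The ‘board of experts’ idea reduces to averaging the individual predictions; a random forest does exactly this over many decorrelated trees. The probabilities below are invented:

```python
# "Board of experts": each model contributes a predicted probability,
# and the ensemble averages them (invented numbers).
expert_probs = [0.8, 0.6, 0.7]
ensemble_prob = sum(expert_probs) / len(expert_probs)

# Final decision from the averaged probability.
decision = ensemble_prob >= 0.5
```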
ML Zoomcamp 2023 – Decision Trees and Ensemble Learning – Part 8
Decision trees parameter tuning Part 8 of the ‘Decision Trees and Ensemble Learning’ section is dedicated to tuning decision tree parameters. Parameter tuning involves selecting the best parameters for training. In this context, ‘tuning’ means choosing parameters in a way that maximizes or minimizes a chosen performance metric (such as AUC or RMSE) on the…
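The metric being maximized can itself be sketched: AUC is the probability that a randomly chosen positive example is scored above a randomly chosen negative one. Below is a pairwise-comparison implementation with invented scores (libraries like scikit-learn compute this more efficiently):

```python
def auc(scores, labels):
    # AUC = fraction of (positive, negative) pairs ranked correctly,
    # counting ties as half a win.
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Invented scores: one positive/negative pair is ranked wrongly.
example = auc([0.9, 0.8, 0.3, 0.2], [1, 0, 1, 0])
```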
ML Zoomcamp 2023 – Decision Trees and Ensemble Learning – Part 7
Decision Tree Learning Algorithm – Part 2/2 In part 1/2 of “Decision Tree Learning Algorithm,” we delved into a simple example to understand how a decision tree learns rules. We used the misclassification rate as a means to evaluate the accuracy of our predictions. Additionally, we discussed that the misclassification rate is just one way…
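The misclassification rate sits alongside two other standard impurity measures; for a binary node with positive-class proportion p, all three can be computed directly (a generic sketch, not code from the course):

```python
import math

# Three ways to measure node impurity for a binary class proportion p.
def impurities(p):
    misclass = min(p, 1 - p)   # misclassification rate
    gini = 2 * p * (1 - p)     # Gini impurity
    entropy = 0.0 if p in (0.0, 1.0) else \
        -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
    return misclass, gini, entropy
```

All three are zero for a pure node and maximal at p = 0.5; they differ in how strongly they reward near-pure splits.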
ML Zoomcamp 2023 – Decision Trees and Ensemble Learning – Part 6
Decision Tree Learning Algorithm – Part 1/2 Before we dive deeper into parameter tuning, which is the topic of Part 8 of this chapter (Decision Trees and Ensemble Learning), let’s take a step back for a moment and explore how a decision tree can generate rules, as we’ve seen in the last article. In this…
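The rules a tree generates are just nested threshold comparisons. A hypothetical learned tree might read like this; the feature names and thresholds are invented for illustration, not taken from the course’s dataset:

```python
# A learned decision tree is just nested threshold rules.
# Feature names and thresholds below are invented.
def predict(record):
    if record["debt_to_income"] <= 0.35:
        return "ok"        # low-risk leaf
    if record["income"] > 50_000:
        return "ok"
    return "default"       # high-risk leaf

risky = predict({"debt_to_income": 0.5, "income": 30_000})
```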
ML Zoomcamp 2023 – Decision Trees and Ensemble Learning – Part 5
Decision Trees – Part 2/2 This is part 2 of ‘Decision Trees.’ While part 1 introduced the concept of a Decision Tree briefly, this section is about overfitting a decision tree and how to control the size of a tree. Let’s look back at the performance of our trained Decision Tree from part 1. Overfitting…
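Why tree size needs controlling can be shown with simple arithmetic: an unrestricted binary tree can double its leaf count at every level, quickly gaining enough leaves to memorize the training set.

```python
# An unrestricted binary tree of the given depth can have up to
# 2**depth leaves -- enough, at modest depths, to give many training
# rows their own leaf (i.e., to overfit).
def max_leaves(depth):
    return 2 ** depth

growth = [max_leaves(d) for d in (3, 6, 10)]
```

Capping parameters like the maximum depth or the minimum samples per leaf keeps this growth in check.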