
Gradient boosting builds trees sequentially: each new tree corrects the errors of the ones before it. It is the dominant algorithm family for structured/tabular data.
Each new model focuses on the examples the previous models got wrong, combining many weak learners into one very strong learner.
Best for structured/tabular data, when you need the best possible accuracy and can spend time tuning hyperparameters.
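The "each tree corrects the previous errors" idea can be sketched by hand: with squared-error loss, the residuals are the negative gradients, so each round fits a small tree to the current residuals and adds a shrunken copy of it to the ensemble. This is a minimal illustrative sketch (not any library's internal implementation); the data, tree depth, and learning rate are arbitrary choices for the demo.

```python
# Minimal gradient boosting sketch for regression with squared error.
# Each round fits a shallow tree to the residuals of the current
# ensemble and adds a shrunken version of its predictions.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=200)

n_trees, learning_rate = 50, 0.1
pred = np.full_like(y, y.mean())        # start from a constant model
trees = []
for _ in range(n_trees):
    residual = y - pred                 # negative gradient of squared error
    tree = DecisionTreeRegressor(max_depth=2).fit(X, residual)
    pred += learning_rate * tree.predict(X)  # shrink each tree's contribution
    trees.append(tree)

baseline_mse = np.mean((y - y.mean()) ** 2)
boosted_mse = np.mean((y - pred) ** 2)
```

The learning rate (shrinkage) is the key hyperparameter trade-off: smaller values need more trees but usually generalize better, which is why tuning effort pays off with this family of models.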