Both are ensembles of weak learners, each of which performs only slightly better than random guessing
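A minimal sketch of the idea with scikit-learn: a single depth-1 decision stump is a weak learner, and boosting many of them yields a much stronger classifier. The dataset and hyperparameters here are made up for illustration.

```python
# Sketch: an ensemble of weak learners, each only slightly better than
# random, combined into a stronger classifier via boosting.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# A depth-1 stump is a classic weak learner.
stump = DecisionTreeClassifier(max_depth=1)
single_score = stump.fit(X, y).score(X, y)

# AdaBoost's default base estimator is exactly such a depth-1 stump.
ensemble = AdaBoostClassifier(n_estimators=100, random_state=0)
ensemble_score = ensemble.fit(X, y).score(X, y)

print(single_score, ensemble_score)  # the ensemble should score higher
```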
XGBoost's monotonic constraints let you require that the model's response be non-decreasing or non-increasing in a given feature
For instance, in a loan approval model, you might have prior knowledge that as the income of an applicant increases, the likelihood of loan default should not increase. In this case, you’d expect the relationship between income and default probability to be monotonically decreasing. With XGBoost, you can enforce this constraint.
boosting has difficulties with datasets where features have high cardinality, e.g. columns of many unique ID numbers, which are common in enterprise applications
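A quick sketch of screening for such columns before training; the column names, data, and the 0.9 uniqueness threshold are arbitrary choices for illustration.

```python
# Sketch: flagging high-cardinality (ID-like) columns before training
# a boosted model, so they can be dropped or encoded differently.
import pandas as pd

df = pd.DataFrame({
    "customer_id": [f"c{i}" for i in range(100)],  # unique per row
    "region": ["north", "south"] * 50,
    "amount": range(100),
})

def high_cardinality_columns(frame, ratio=0.9):
    """Return object/categorical columns whose unique-value ratio exceeds `ratio`."""
    cols = frame.select_dtypes(include=["object", "category"]).columns
    return [c for c in cols if frame[c].nunique() / len(frame) > ratio]

print(high_cardinality_columns(df))  # ['customer_id']
```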
```dataview
table file.inlinks, filter(file.outlinks, (x) => !contains(string(x), ".jpg") AND !contains(string(x), ".pdf") AND !contains(string(x), ".png")) as "Outlinks" from [[]] and !outgoing([[]]) AND -"Changelog"
```