In the documentation, it says that "the bias term is the expected output of the model over the training dataset".
In my case I use XGBoost for binary classification, meaning that it is the log-odds that is being explained.
With TreeSHAP I get -1.3906583786011 for the bias term, while the mean log-odds over the training set is -2.3586306966. Hence, I do not understand how the bias term is computed.
If it matters, I used XGBoost in R and computed the Shapley values with:
treeShaps <- predict(xgbModel, testset, predcontrib = TRUE)
Any help would be greatly appreciated!
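For what it's worth, here is a minimal sketch of how the two numbers above could be compared side by side in R. It assumes the same fitted model (`xgbModel`) and a training matrix `trainset` (a name introduced here for illustration); `predcontrib` and `outputmargin` are standard arguments of xgboost's `predict` method in R:

```r
# Sketch, assuming xgbModel and a training matrix `trainset` already exist.
library(xgboost)

# Per-row feature contributions; the last column is the BIAS term,
# which is identical for every row.
contribs <- predict(xgbModel, trainset, predcontrib = TRUE)
bias <- contribs[1, ncol(contribs)]

# Mean raw margin (log-odds) over the training set, for comparison.
mean_margin <- mean(predict(xgbModel, trainset, outputmargin = TRUE))

cat("bias term:  ", bias, "\n")
cat("mean margin:", mean_margin, "\n")
```

Note that `outputmargin = TRUE` returns log-odds directly; averaging predicted probabilities and then taking the log-odds of that average would give a different number, since the mean of log-odds is not the log-odds of the mean.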