I understand that SHAP values are obtained by averaging the marginal contribution of a feature $i$ over all possible combinations (or "coalitions") $S$ of the remaining features, but how are "all possible coalitions" of features actually considered?
Since the concept behind SHAP values is based on cooperative game theory, the SHAP value for a feature $i$ is defined over permutations of features, comparing the value of a set $S$ with that of $S \cup \{i\}$:

$$\phi_i = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(|N|-|S|-1)!}{|N|!} \left[ v(S \cup \{i\}) - v(S) \right]$$
However, in ML, most models are insensitive to the order in which features are fed to them (the sequence in which a feature is introduced doesn't matter). Using permutations therefore seems redundant: many orderings yield identical model outputs, because the model only depends on *which* features are present, not on their order.
Consequently, computing SHAP values over all $|N|!$ permutations would be much more expensive than summing over the $2^{|N|-1}$ possible combinations (subsets) of the remaining features.
So, how are SHAP values actually calculated in practice?
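To make the question concrete, here is a minimal sketch (not how the `shap` library actually does it) using a hypothetical toy value function `v(S)` in place of a trained model. It computes the Shapley value both ways: summing over subsets with the factorial weight, and averaging marginal contributions over all orderings. The two agree exactly, which is the redundancy being asked about.

```python
import itertools
import math

# Hypothetical toy "value function": maps a coalition of feature indices
# to a payoff. In real SHAP this would be the model's expected output
# conditioned on the features in S; any set function works for the demo.
def v(S):
    S = frozenset(S)
    payoff = sum(S)          # additive contribution of each feature index
    if {0, 1} <= S:          # an interaction term between features 0 and 1
        payoff += 5
    return payoff

N = [0, 1, 2]  # all feature indices

def shapley_subsets(i):
    """Shapley value of feature i via the subset (combination) form."""
    n = len(N)
    others = [j for j in N if j != i]
    total = 0.0
    for r in range(len(others) + 1):
        for S in itertools.combinations(others, r):
            # |S|! (n - |S| - 1)! / n!  counts the orderings that place
            # exactly the features in S before i.
            weight = (math.factorial(len(S)) * math.factorial(n - len(S) - 1)
                      / math.factorial(n))
            total += weight * (v(set(S) | {i}) - v(S))
    return total

def shapley_permutations(i):
    """Same value via averaging marginal contributions over all |N|! orderings."""
    perms = list(itertools.permutations(N))
    total = 0.0
    for order in perms:
        before = set(order[:order.index(i)])  # features preceding i
        total += v(before | {i}) - v(before)
    return total / len(perms)

for i in N:
    print(i, shapley_subsets(i), shapley_permutations(i))
```

Grouping the $|N|!$ orderings by the set of features that precede $i$ is exactly what produces the $|S|!\,(|N|-|S|-1)!/|N|!$ weight, so the subset form is the permutation average with the duplicate orderings collapsed.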