DeepExplainer outputs shap_values all zeros explaining an AE model's latent difference loss #3349
CandyClass asked this question in Q&A (unanswered).
I am trying to use SHAP to explain the importance of the input features with respect to the difference loss between two latent outputs.
The original net takes two inputs (sizes (num, 17000) and (num, 4100)) and acts as an AE that reconstructs each input from itself. A loss term pushes the two latents further apart, and this is the quantity I want to explain with SHAP. Because DeepExplainer won't accept two inputs, I wrote a copynet class that packs the original net; it and the SHAP code are as below:
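The original code block was not included in the post. A minimal sketch of what such a wrapper might look like, assuming a PyTorch model; the class names `TwoInputAE` and `CopyNet`, the layer sizes, and the toy dimensions (30/10 instead of the real 17000/4100) are all hypothetical:

```python
import torch
import torch.nn as nn

class TwoInputAE(nn.Module):
    """Hypothetical stand-in for the original two-input AE (real dims: 17000 / 4100)."""
    def __init__(self, d1=30, d2=10, latent=8):
        super().__init__()
        self.enc1 = nn.Linear(d1, latent)
        self.enc2 = nn.Linear(d2, latent)

    def forward(self, x1, x2):
        return self.enc1(x1), self.enc2(x2)

class CopyNet(nn.Module):
    """Packs the two inputs into one tensor so DeepExplainer sees a single input."""
    def __init__(self, net, d1):
        super().__init__()
        self.net = net
        self.d1 = d1

    def forward(self, x):
        # split the concatenated tensor back into the two original inputs
        x1, x2 = x[:, :self.d1], x[:, self.d1:]
        z1, z2 = self.net(x1, x2)
        # one scalar per sample: the latent-difference loss being explained
        return ((z1 - z2) ** 2).mean(dim=1, keepdim=True)

net = TwoInputAE()
copynet = CopyNet(net, d1=30)

# concatenate the two inputs along the feature axis
x = torch.cat([torch.randn(4, 30), torch.randn(4, 10)], dim=1)
out = copynet(x)  # shape (4, 1)
```

With a wrapper of this shape, the explainer would then be built as `shap.DeepExplainer(copynet, background)` where `background` is a tensor with the same concatenated feature layout.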
The shap_values come out all zeros, and I don't know what the problem could be.
I've checked the output of the copynet, and it differs a lot between different inputs. I've also tried changing a single value in a sample to zero, and the loss changes too.
Is there a problem in the code? One concern is that the input data are sparse, with about 95% zeros. But what confuses me is that the outputs of copynet clearly differ from one another, with both positive and negative values, so how can the shap_values stay all zeros?
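Since DeepExplainer's attributions are built from gradients, all-zero shap_values despite varying outputs often mean no gradient reaches the input tensor (e.g. a `detach()`, `.data` access, or a numpy round-trip inside `forward`). A quick autograd sanity check can rule that out; the `model` below is a made-up stand-in, since the real copynet isn't shown here:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in; swap in the real wrapped copynet to run the check.
model = nn.Sequential(nn.Linear(40, 8), nn.Tanh(), nn.Linear(8, 1))

x = torch.randn(4, 40, requires_grad=True)
model(x).sum().backward()

# If this is 0.0, the graph from output back to input is broken, and
# SHAP values will be all zeros no matter how much the outputs differ.
max_grad = float(x.grad.abs().max())
print(max_grad > 0)
```

If the gradient is nonzero on the real model, the next suspects would be the background/baseline data (with 95% zeros, a baseline that already matches most features can legitimately yield near-zero attributions) or unsupported ops in the model.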