PatchInferer with AvgMerger and filter_fn leads to NaNs #7743
Comments
Hi @nicholas-greig, could you please share a small piece of code with which I can reproduce the issue? Thanks.
Describe the bug
On current master, when using the `PatchInferer` class with an `AvgMerger` (the default `Merger` class) and a `filter_fn`, the counts remain zero everywhere the `filter_fn` rejects a region. When `AvgMerger.finalize()` is called, the `self.values` attribute of `AvgMerger` is divided in place by the `self.counts` tensor. Since `self.counts` is initialised to zero, this division by zero produces NaNs, so every region that the `filter_fn` successfully filters out ends up with NaN outputs.
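A minimal, library-free sketch of the failure mode described above (a hypothetical toy analogue of the `AvgMerger` accumulate-then-divide logic, written with NumPy; the names `filter_fn`, `values`, and `counts` mirror the issue's description, not MONAI's exact API):

```python
import numpy as np

# Toy 1-D analogue of AvgMerger: accumulate patch values and patch counts,
# then divide. Patches rejected by filter_fn never touch either buffer.
values = np.zeros(8, dtype=np.float32)
counts = np.zeros(8, dtype=np.float32)

patches = [(0, np.ones(4, dtype=np.float32)),
           (4, np.ones(4, dtype=np.float32))]

def filter_fn(location, patch):
    # Hypothetical filter: accept only the patch at location 0.
    return location == 0

for loc, patch in patches:
    if not filter_fn(loc, patch):
        continue  # counts stay 0 in this region
    values[loc:loc + 4] += patch
    counts[loc:loc + 4] += 1

# The finalize step: 0/0 -> NaN wherever a region was filtered out.
with np.errstate(divide="ignore", invalid="ignore"):
    out = values / counts

print(out)  # first half is 1.0, second half is NaN
```

The first half of `out` is a valid average; the filtered second half is all NaN, which is exactly the reported behaviour.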
A quick in-place assignment to `counts` (setting it to 1, for example) would turn these values into zeros after the in-place division, but if the output is supposed to be real-valued/continuous, it might be better to overwrite these values in place with the smallest representable value (using `torch.finfo(self.values.dtype).min` or something similar). Monkey-patching the outputs from an `Inferer` is not the best situation, since a network can produce NaNs due to exploding weights or overflow during training, and masking this by overwriting NaNs with zero would merely obfuscate that problem.
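One way the sentinel idea above could look (a sketch only, not the actual MONAI fix; `values` and `counts` stand in for the `AvgMerger` buffers, and `np.finfo` mirrors the `torch.finfo(...).min` suggestion):

```python
import numpy as np

def finalize(values, counts, use_sentinel=True):
    """Sketch of a NaN-free finalize for an AvgMerger-like buffer.

    Regions no patch ever covered (counts == 0) become either 0
    (counts clamped to 1) or the smallest representable value, so real
    NaNs produced by the network itself are not silently masked.
    """
    uncovered = counts == 0
    out = values / np.maximum(counts, 1)  # avoids 0/0 -> NaN
    if use_sentinel:
        # Mark uncovered regions with a sentinel rather than a valid 0.
        out[uncovered] = np.finfo(values.dtype).min
    return out

vals = np.array([2.0, 0.0], dtype=np.float32)
cnts = np.array([2.0, 0.0], dtype=np.float32)
print(finalize(vals, cnts))  # averaged value, then the float32 sentinel
```

Distinguishing "never covered" from "averaged to zero" is the design point here: clamping alone loses that distinction, while the sentinel keeps it without introducing NaNs.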