Numerical Instability in metrics.py #87
Is the output of your network consistent across runs? First, let's make sure the inference itself is deterministic.
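A common way to rule out nondeterministic inference (assuming the project uses PyTorch, which this repository does) is to seed every RNG and disable cuDNN autotuning before evaluating. This is a generic sketch, not code from the repository:

```python
import random

import numpy as np
import torch

# Seed every RNG that inference or data loading might touch.
random.seed(0)
np.random.seed(0)
torch.manual_seed(0)
torch.cuda.manual_seed_all(0)

# Force cuDNN to use deterministic kernels instead of benchmarking
# for the fastest (potentially nondeterministic) ones.
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False

# Also remember to freeze dropout and batch-norm statistics:
# model.eval()
```

If the per-image predictions are bit-identical across runs after this, the evaluation code is the right place to look; if not, the variation comes from the network, not the metric.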
When I use metrics.py to evaluate a model with the same weights, I get different mIoU values across runs. I am using your DeepLab implementation as a backbone in another network, together with your evaluation code.
Below are 3 such runs, where metrics.py has been used to evaluate the model on the same validation set, using the same weights.
RUN 1
RUN 2
RUN 3
This seems like an issue of numerical instability. In particular, I suspect that either the _fast_hist function or the division in the scores function in the utils/metric.py file is the root cause. I would greatly appreciate any help here,
thank you!