Numerical Instability in metrics.py #87

Open
DebasmitaGhose opened this issue Aug 6, 2020 · 1 comment
Comments

@DebasmitaGhose

When I use metrics.py to evaluate a model with the same weights, I get different mIoU values on different runs.

I am using your DeepLab implementation as a backbone in another network, and I am also using your evaluation code.
Below are three such runs, where metrics.py was used to evaluate the model on the same validation set with the same weights.

| Metric | Run 1 | Run 2 | Run 3 |
| --- | --- | --- | --- |
| Pixel Accuracy | 0.891 | 0.896 | 0.882 |
| Mean Accuracy | 0.755 | 0.761 | 0.748 |
| Frequency Weighted IoU | 0.810 | 0.819 | 0.798 |
| Mean IoU | 0.615 | 0.622 | 0.609 |


This seems like a numerical-instability issue.
In particular, I suspect that either the `_fast_hist` function or the division in the `scores` function in `utils/metric.py` is the root cause.

I would greatly appreciate any help here.
Thank you!
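For context, a confusion-matrix-based mIoU of this kind is normally pure integer counting followed by a handful of float divisions, so it should be exactly reproducible for fixed predictions. The sketch below is only an illustration of that typical pattern, not the actual code in `utils/metric.py`; the function names mirror the ones mentioned above, but the bodies are my own simplification.

```python
import numpy as np

def fast_hist(label_true, label_pred, n_class):
    # Accumulate a confusion matrix: rows = ground truth, cols = prediction.
    # Only integer bincounts are involved, so the result is exact and
    # deterministic for a fixed pair of label/prediction arrays.
    mask = (label_true >= 0) & (label_true < n_class)
    hist = np.bincount(
        n_class * label_true[mask].astype(int) + label_pred[mask].astype(int),
        minlength=n_class ** 2,
    ).reshape(n_class, n_class)
    return hist

def scores(hist):
    # Per-class IoU = TP / (TP + FP + FN); classes that never appear give
    # NaN and are excluded from the mean via nanmean.
    with np.errstate(divide="ignore", invalid="ignore"):
        iou = np.diag(hist) / (hist.sum(axis=1) + hist.sum(axis=0) - np.diag(hist))
    acc = np.diag(hist).sum() / hist.sum()
    return {"Pixel Accuracy": acc, "Mean IoU": np.nanmean(iou)}
```

If the metric code really looks like this, run-to-run variation would have to come from its inputs (the predictions or the data order) rather than from the histogram or the division itself.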

@kazuto1011 (Owner)

Is the output of your network consistent across runs? Let me make sure the inference is deterministic.
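One way to check this is to verify that two forward passes over the same batch produce identical logits, with dropout disabled and cuDNN forced into deterministic mode. The snippet below is a rough sketch under that assumption; `model` and `images` are placeholders for your network and a fixed validation batch, and it assumes the model returns a single logits tensor.

```python
import random
import numpy as np
import torch

# Make CUDA/cuDNN behaviour as reproducible as possible.
random.seed(0)
np.random.seed(0)
torch.manual_seed(0)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False

model.eval()  # disable dropout and use running BatchNorm statistics
with torch.no_grad():
    out1 = model(images)
    out2 = model(images)

# If either check fails, the variation comes from the network or the data
# pipeline (e.g. a shuffled DataLoader), not from the metric computation.
print(torch.equal(out1, out2))
print(torch.equal(out1.argmax(dim=1), out2.argmax(dim=1)))
```

It is also worth confirming that the validation DataLoader uses `shuffle=False` and that no random augmentation is applied at evaluation time.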
