
Help needed on reproducing the performance on Cifar-100 #14

Open
wishforgood opened this issue Dec 20, 2017 · 3 comments

wishforgood commented Dec 20, 2017

I used the default settings (which I think correspond to DenseNet-12-BC with data augmentation) on CIFAR-100, just changing the name of the dataset class and the nClasses variable. The training curve looks like this:
[training curve plot]
Although training has not finished yet, judging from the training curves of other networks on CIFAR-100, I expect no further major changes in accuracy. The highest accuracy so far is 75.59%, which only matches the reported performance of DenseNet-12 (depth 40) with data augmentation.
Has anyone tested this repo on CIFAR-100 yet?
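For reference, the change I made amounts to something like the following. This is a minimal sketch rather than the repo's actual code: the nClasses name is the variable mentioned above, but the transform pipeline and the normalization statistics (the commonly quoted CIFAR-100 channel means/stds) are my assumptions.

```python
import torchvision
import torchvision.transforms as transforms

nClasses = 100  # was 10 for CIFAR-10

# Standard CIFAR augmentation: pad-and-crop plus horizontal flip.
# The normalization values below are the commonly used CIFAR-100
# channel statistics, not necessarily the values this repo uses.
transform_train = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize((0.5071, 0.4865, 0.4409),
                         (0.2673, 0.2564, 0.2762)),
])

# Swap the dataset class from CIFAR10 to CIFAR100.
train_set = torchvision.datasets.CIFAR100(
    root='./data', train=True, download=True, transform=transform_train)
```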

@wishforgood (Author)

[final training curve plot]
No further changes by the end of training.


ZhenyF commented Jun 14, 2018

Hi @wishforgood,
I have tried another reimplementation and ran into the same problem. The error rate on CIFAR-10 with densenet40-nonBC is only 6.0% (versus the 5.24% reported in this repo), but when I test the equivalent model in TensorFlow it is about 5.4%.
I think the gap is caused by PyTorch rather than by the model.
Have you solved this yet?

@wishforgood (Author)

Not yet; in the end I decided to try other models such as Wide-ResNet.
