
Questions about the precision in validation #35

Open
Zwette opened this issue Jun 8, 2019 · 3 comments
Comments

@Zwette
Zwette commented Jun 8, 2019

I trained the ResNet architecture (cifar_shakeshake26 in the PyTorch version) on the CIFAR-10 dataset with 1000 unlabeled images and 44000 labeled images (the remaining 5000 images are used for validation) for about 180 epochs, with batch size 256 and labeled batch size 62.
But I observed that the validation precision (top-1) first rises from 43% to 50% and then falls to only 13% (it began to fall after about 10 epochs) over the course of training. I am puzzled by this. The training precision always rises and never falls, so why does the validation precision fall?
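Not part of the repo, but for concreteness, here is a minimal sketch of the data split and mixed-batch composition described above (assuming the standard 1,000-labeled / 44,000-unlabeled / 5,000-validation split that a later comment in this thread points to):

```python
import random

def split_indices(n_total=50000, n_val=5000, n_labeled=1000, seed=0):
    """Split CIFAR-10 train indices into labeled / unlabeled / validation."""
    rng = random.Random(seed)
    indices = list(range(n_total))
    rng.shuffle(indices)
    val = indices[:n_val]
    labeled = indices[n_val:n_val + n_labeled]
    unlabeled = indices[n_val + n_labeled:]
    return labeled, unlabeled, val

def make_batch(labeled, unlabeled, batch_size=256, labeled_batch_size=62, seed=0):
    """Compose one training batch of 256 with 62 labeled and 194 unlabeled examples."""
    rng = random.Random(seed)
    return (rng.sample(labeled, labeled_batch_size)
            + rng.sample(unlabeled, batch_size - labeled_batch_size))
```

In the actual training code these index sets would back the labeled and unlabeled streams of a two-stream batch sampler; the sizes and names here are assumptions for illustration.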

@lukk47
lukk47 commented Aug 13, 2019

@Easquel Have you solved it?
I've come across the same issue: my validation accuracy stays around 60% until the end of training, while the training accuracy can go up to 100%.

@lukk47
lukk47 commented Aug 15, 2019

@tarvaina Could you help with the PyTorch version on CIFAR-10?
I ran the code with PyTorch 0.3.1. The validation accuracy went up to 50% and then dropped to 10~20%.
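A practical safeguard against a validation curve that peaks and then collapses (not from this repo, just a generic sketch) is to checkpoint on the best validation accuracy rather than keeping only the final-epoch model:

```python
class BestTracker:
    """Track the best validation accuracy seen so far across epochs."""

    def __init__(self):
        self.best_acc = float("-inf")
        self.best_epoch = None

    def update(self, epoch, val_acc):
        """Return True if this epoch improved on the best so far."""
        if val_acc > self.best_acc:
            self.best_acc = val_acc
            self.best_epoch = epoch
            return True
        return False
```

With the curve reported above (roughly 43% → 50% → collapse), this would keep the ~50% checkpoint instead of the degraded final model; the accuracy values here are only illustrative.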

@Wangzheaos
"I trained the ResNet arch…"

It should be 1000 labeled images and 44000 unlabeled images.


3 participants