Questions about the precision in validation #35
Comments
@Easquel Have you solved it?
@tarvaina Could you help with the PyTorch version on CIFAR-10?
It should be 1000 labeled images and 44000 unlabeled images.
I trained the ResNet architecture (cifar_shakeshake26 in the PyTorch version) on the CIFAR-10 dataset with 1000 unlabeled images and 44000 labeled images (the remaining 5000 images are used for validation) for about 180 epochs, with a batch size of 256 and a labeled batch size of 62.
But I observed that during training the validation precision (top-1) first rose from 43% up to 50% and then fell to only 13% (it began to fall after about 10 epochs). I am puzzled by this phenomenon. Besides, the training precision always rises and never falls; why would the validation precision fall?
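For reference, the data split and mixed-batch composition described above can be sketched as follows. This is a minimal illustration, not the repository's actual sampler implementation; the split sizes (1000 labeled, 44000 unlabeled, 5000 validation) and batch composition (62 labeled out of 256) are taken from the issue text, and all function and variable names here are hypothetical.

```python
import random

# Sizes as stated in the issue (assumed, for illustration only).
NUM_IMAGES = 50_000          # CIFAR-10 training set size
NUM_LABELED = 1_000
NUM_VALIDATION = 5_000
BATCH_SIZE = 256
LABELED_BATCH_SIZE = 62      # labeled examples per mixed batch

rng = random.Random(0)
indices = list(range(NUM_IMAGES))
rng.shuffle(indices)

# Partition the 50000 training indices into three disjoint pools.
validation_idx = indices[:NUM_VALIDATION]
labeled_idx = indices[NUM_VALIDATION:NUM_VALIDATION + NUM_LABELED]
unlabeled_idx = indices[NUM_VALIDATION + NUM_LABELED:]   # 44000 indices

def two_stream_batch(labeled_pool, unlabeled_pool, rng):
    """Draw one mixed batch: 62 labeled + 194 unlabeled indices."""
    labeled_part = rng.sample(labeled_pool, LABELED_BATCH_SIZE)
    unlabeled_part = rng.sample(unlabeled_pool, BATCH_SIZE - LABELED_BATCH_SIZE)
    return labeled_part + unlabeled_part

batch = two_stream_batch(labeled_idx, unlabeled_idx, rng)
```

Each batch then mixes 62 labeled with 194 unlabeled examples, which matches the `--batch-size 256 --labeled-batch-size 62` settings mentioned above.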