Validation accuracy discrepancy #97
Comments
Technically, and in most of my experience, they should be the same...
Hey! I was testing some of the checkpoints I've obtained during training with an offline script, and the validation accuracies seem to be consistently lower than the values reported during training and saved in the checkpoint filenames. For example:
cfp_fp_epoch_10_batch_20000_0.993286.h5
I have this checkpoint, which represents the best validation checkpoint for CFP-FP. I expected the accuracy to be 0.993286, as stated in the filename. However, when I run:
```python
# Imports assumed from the project's usual usage: load_model from Keras,
# evals from this repository.
from tensorflow.keras.models import load_model
import evals

full_h5 = r'..\checkpoints\cfp_fp_epoch_10_batch_20000_0.993286.h5'
bb = load_model(full_h5)
# Channel order is reversed (BGR -> RGB) before feeding the model
eea = evals.eval_callback(lambda imms: bb(imms[:, :, :, ::-1]), r'C:\development\VBMatching\RecTool_FinalFix\cfp_fp.bin', batch_size=32)
eea.on_epoch_end()
```
The output accuracy is 0.990000. This behaviour is consistent across every checkpoint I've tested so far. Could there be some discrepancy between how evaluation is done during training and how I'm trying to replicate it offline?
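One possible source of such a gap (a hypothesis, not confirmed from this thread) is test-time flip augmentation: some training-loop evaluators average each image's embedding with the embedding of its horizontally flipped copy before computing verification accuracy, while a plain offline forward pass does not. A minimal NumPy sketch of that "flip test", with a hypothetical `model_fn` standing in for the actual network:

```python
import numpy as np

def flip_embeddings(model_fn, images):
    """Average each embedding with the embedding of the horizontally
    flipped image, then re-normalise -- the 'flip test' some training
    evaluators apply. `images` is NHWC; `model_fn` is a stand-in for
    the real model and should return one embedding row per image."""
    emb = model_fn(images)
    emb_flip = model_fn(images[:, :, ::-1, :])  # flip along the width axis
    merged = emb + emb_flip
    return merged / np.linalg.norm(merged, axis=1, keepdims=True)

def verification_accuracy(emb1, emb2, is_same, threshold):
    """Cosine-similarity verification at a fixed threshold,
    assuming unit-norm embeddings."""
    sims = np.sum(emb1 * emb2, axis=1)
    return np.mean((sims > threshold) == is_same)
```

If the training-time callback applies this averaging and the offline call does not, a small but consistent accuracy difference like 0.9933 vs. 0.9900 would be expected.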