Validation accuracy discrepancy #97

Open
jcsm89 opened this issue Sep 12, 2022 · 1 comment

Comments


jcsm89 commented Sep 12, 2022

Hey! I was testing some of the checkpoints I've obtained during training with an offline script, and the validation accuracies seem to be consistently lower than the values reported during training and saved in the checkpoint filenames. For example:

cfp_fp_epoch_10_batch_20000_0.993286.h5

I have this checkpoint, which is the best validation checkpoint for CFP-FP, so I expected the accuracy to be 0.993286, as in the filename. However, when I run:

from tensorflow.keras.models import load_model
import evals  # evals.py from this repository

full_h5 = r'..\checkpoints\cfp_fp_epoch_10_batch_20000_0.993286.h5'
bb = load_model(full_h5)
# Evaluate on CFP-FP, reversing the channel axis (RGB <-> BGR) before feeding the model
eea = evals.eval_callback(lambda imms: bb(imms[:, :, :, ::-1]), r'C:\development\VBMatching\RecTool_FinalFix\cfp_fp.bin', batch_size=32)
eea.on_epoch_end()

The output accuracy is 0.990000, and this behaviour is consistent across every checkpoint I've tested so far. Could there be some discrepancy between how the evaluation is done during training and how I'm trying to replicate it offline?

@leondgarse
Owner

Technically, and most of the time in my practice, they should be the same...
I'm not sure why you are using lambda imms: bb(imms[:, :, :, ::-1]) here. What accuracy do you get with simply evals.eval_callback(bb, "xxx/bin")?
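
A minimal sketch of that comparison, reusing the paths from the snippet above; it assumes load_model comes from tf.keras and that evals is this repository's evals.py, importable from the working directory:

from tensorflow.keras.models import load_model
import evals  # assumed: evals.py from this repository on the import path

bb = load_model(r'..\checkpoints\cfp_fp_epoch_10_batch_20000_0.993286.h5')

# Pass the model directly, without reversing the channel axis, so the result
# isolates whether the RGB/BGR flip in the original snippet explains the gap.
eea = evals.eval_callback(bb, r'C:\development\VBMatching\RecTool_FinalFix\cfp_fp.bin', batch_size=32)
eea.on_epoch_end()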
