
Confusion about test results (train/validation/test set split) #60

Open
Liudzz opened this issue May 26, 2021 · 0 comments
Liudzz commented May 26, 2021

Firstly, thanks for sharing so many models on CIFAR-100; this project is very beginner-friendly! I have been learning CNNs for two years, and your project always inspires me!

I'd like to ask you a question. In your training code, do you use the test set to select your best model? If so, are all the reported results from that best model? I have learned that datasets usually need to be divided into three folds: train/validation/test.
The accuracies on ImageNet also confuse me: do those models use the validation set to select the best model and then evaluate on the test set?
My understanding is that the test fold is used once after training, and that result is reported as the best accuracy. I follow the same practice in my own projects and studies.

I've searched the Internet and still haven't found an answer. It would be great if you could respond! :)
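For reference, here is a minimal sketch of the split I have in mind: carving a validation fold out of the 50,000 CIFAR-100 training images so that model selection never touches the 10,000-image test set. The 10% validation size and the index-based approach are my own assumptions, not taken from this repository's code.

```python
import random

# Hypothetical sketch: split the CIFAR-100 training indices into
# train/validation folds; the official test set stays untouched
# until final evaluation.
indices = list(range(50_000))        # stand-in for CIFAR-100 train indices
random.Random(42).shuffle(indices)   # fixed seed for a reproducible split

val_size = 5_000                     # assumed 10% validation split
val_idx, train_idx = indices[:val_size], indices[val_size:]

print(len(train_idx), len(val_idx))  # 45000 5000
```

The validation fold would then drive checkpoint selection (e.g. keep the epoch with the best validation accuracy), and the test set is evaluated exactly once at the end.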

By the way, in line 171 of your train.py you use the current time as the log-file name, so if we restart training from a saved epoch we can't get a continuous training curve. In my project I use `runs` as a default name, and it seems I do get a continuous curve (maybe because I saved a checkpoint and resumed from a certain epoch?).
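What I mean is roughly this: deriving the log directory from a fixed run name instead of a timestamp, so a resumed run appends events to the same directory and TensorBoard draws one continuous curve. The function name and defaults below are hypothetical, not from train.py.

```python
import os

def log_dir(base="runs", run_name="default"):
    # Fixed run name: every restart resolves to the same directory,
    # so resumed training appends to the existing event files instead
    # of starting a new, disconnected curve.
    return os.path.join(base, run_name)

print(log_dir(run_name="resnet18"))  # runs/resnet18
```

With a timestamp-based name, each restart creates a fresh directory and the curve breaks at the resume point.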

I've noticed this project was updated five months ago, and I'd like to thank you again!
