
Can Not Use Cuda When Running K-fold Cross Validation (bug in v0.0.6) #25

Open
LYH-YF opened this issue Aug 7, 2022 · 0 comments
Labels
bug Something isn't working

Comments

@LYH-YF
Owner

LYH-YF commented Aug 7, 2022

When using a GPU to train any model with k-fold cross validation, the first fold runs normally, but from the second fold onward training becomes noticeably slow. In fact, the GPU is no longer being used at all.
The cause is the checkpoint-saving code. All parameters (the entries of the config object) are written to a JSON file when a checkpoint is saved. Crucially, config['device']=torch.device('cuda') cannot be serialized to JSON, so this key is deleted directly from the config object. When the next fold starts, config['device'] can no longer be found, and the model is never moved to the GPU.
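A minimal sketch of the failure mode and one possible fix. The config contents and function names here are hypothetical, and a plain unserializable object stands in for torch.device('cuda') so the example needs no GPU; the point is that serializing a copy of the config, instead of deleting the key from the live object, keeps config['device'] available for the next fold.

```python
import json


class DeviceStub:
    """Stand-in for torch.device('cuda'), which json cannot serialize."""


# Hypothetical config object resembling the one saved in the checkpoint.
config = {"model": "GTS", "device": DeviceStub()}


def save_checkpoint_buggy(config):
    # Buggy approach: permanently delete the unserializable key from the
    # live config, so later folds find no 'device' entry.
    del config["device"]
    return json.dumps(config)


def save_checkpoint_fixed(config):
    # Fix: serialize a filtered copy and leave the live config untouched.
    serializable = {k: v for k, v in config.items() if k != "device"}
    return json.dumps(serializable)


save_checkpoint_fixed(config)
assert "device" in config  # config is intact for the next fold
```

With the buggy version, the second call to model-setup code would fall back to the CPU because the 'device' key is gone; the fixed version only filters the key out of the serialized copy.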

@LYH-YF LYH-YF added the bug Something isn't working label Aug 7, 2022
@LYH-YF LYH-YF changed the title Can't Not Use Cuda When Running K-fold Cross Validation (bug in v0.0.6) Can Not Use Cuda When Running K-fold Cross Validation (bug in v0.0.6) Aug 11, 2022