[v2 BUG]: LightningModule has parameters that were not used #853
Comments
I can reproduce this issue on two Linux machines, using Python 3.11/3.12, torch 2.2/2.3, and lightning 2.2.4
Are these on machines with multiple GPUs? If so, what happens when you use
Yes, the machines have multiple GPUs. But when I try that, I get this error:
Perhaps we need to change how the
Related: Lightning-AI/pytorch-lightning#17212
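The two workarounds discussed in this thread are the standard ones for the DDP "parameters that were not used" error: train on a single device so DDP is never engaged, or keep multi-GPU training but enable unused-parameter detection. A minimal sketch of both, assuming the Lightning 2.x `Trainer` API (`devices` and `strategy` are real Lightning parameters; the exact values Chemprop passes are not shown in this thread):

```python
from lightning import pytorch as pl

# Workaround 1: avoid DDP entirely by training on a single device.
trainer = pl.Trainer(devices=1)

# Workaround 2: keep multi-GPU DDP training, but tell the strategy to
# tolerate parameters that never receive gradients in a given step.
trainer = pl.Trainer(
    accelerator="gpu",
    devices=2,
    strategy="ddp_find_unused_parameters_true",
)
```

Workaround 2 has a runtime cost (DDP traverses the autograd graph every step to find the unused parameters), so fixing the model to use all of its parameters is preferable when possible.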
After adding the following code block to the MPNN class, I found that there are two parameters (
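The diagnostic code block referenced above did not survive extraction. A minimal sketch of the usual approach — after a backward pass, any parameter whose `.grad` is still `None` never contributed to the loss, which is exactly what DDP complains about. The `Toy` module here is a made-up stand-in, not Chemprop's MPNN:

```python
import torch
import torch.nn as nn

class Toy(nn.Module):
    def __init__(self):
        super().__init__()
        self.used = nn.Linear(4, 1)
        self.unused = nn.Linear(4, 1)  # deliberately never called in forward

    def forward(self, x):
        return self.used(x)

model = Toy()
loss = model(torch.randn(2, 4)).sum()
loss.backward()

# Parameters with no gradient after backward never participated in the
# loss; DDP raises an error for exactly these during multi-GPU training.
unused = [name for name, p in model.named_parameters() if p.grad is None]
print(unused)  # ['unused.weight', 'unused.bias']
```

In a LightningModule, the same check can be run from the `on_after_backward` hook so it fires automatically each step.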
I wonder if changing
Using
How about changing? However, another issue arises while running
@shihchengli I don't understand why the latter works but the former doesn't - is
Yes, please do break that out into another issue.
@JacksonBurns It would be an integer
Describe the bug
When trying to train the example Chemprop model with:
I get the error message:
Expected behavior
The trained model is not created at train_example/model_0/best.pt; instead, only train_example/model_0/trainer_logs/version_0/hparams.yml exists at that path, because training fails.
Environment
Chemprop 2 installed from source using pip; Python 3.11.9; OS: Linux
Error Stack Trace