I have correctly loaded my pre-trained embeddings by using a dummy model. By default they are frozen, meaning that no gradient operation will affect them.
However, this also means the model cannot be trained, as you can see here:
```
Traceback (most recent call last):
  File "dummy_model.py", line 47, in <module>
    best_valid_score, best_valid_result = trainer.fit(train_data, valid_data)
  File "/progs/recbole/recbole_env/lib/python3.8/site-packages/recbole/trainer/trainer.py", line 439, in fit
    train_loss = self._train_epoch(
  File "/progs/recbole/recbole_env/lib/python3.8/site-packages/recbole/trainer/trainer.py", line 261, in _train_epoch
    scaler.scale(loss + sync_loss).backward()
  File "/progs/recbole/recbole_env/lib/python3.8/site-packages/torch/_tensor.py", line 492, in backward
    torch.autograd.backward(
  File "/progs/recbole/recbole_env/lib/python3.8/site-packages/torch/autograd/__init__.py", line 251, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
```
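For context, this error is reproducible outside RecBole whenever every parameter feeding the loss is frozen: the loss then has no `grad_fn`, so `backward()` has nothing to differentiate. A minimal sketch (the table shape here is just illustrative):

```python
import torch
import torch.nn as nn

# freeze=True sets requires_grad=False on the embedding table
pretrained = torch.randn(10, 4)
emb = nn.Embedding.from_pretrained(pretrained, freeze=True)

ids = torch.tensor([1, 2, 3])
loss = emb(ids).sum()

# Every input to the loss is detached from autograd, so this raises:
# RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
loss.backward()
```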
Of course, if I set `freeze` to `False`, it works.
The question is: is there a way to keep the embeddings frozen during training?
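For reference, in plain PyTorch the usual pattern is to keep `freeze=True` but make sure the model has at least one trainable parameter, and to build the optimizer only over the parameters that still require gradients. A sketch under those assumptions (the `DummyModel` and its projection head are hypothetical, not RecBole API):

```python
import torch
import torch.nn as nn

class DummyModel(nn.Module):  # hypothetical stand-in for the dummy model above
    def __init__(self, pretrained: torch.Tensor):
        super().__init__()
        # Frozen: gradients never touch the pre-trained table
        self.emb = nn.Embedding.from_pretrained(pretrained, freeze=True)
        # Trainable: gives the loss a grad_fn so backward() succeeds
        self.proj = nn.Linear(pretrained.size(1), 1)

    def forward(self, ids):
        return self.proj(self.emb(ids))

model = DummyModel(torch.randn(10, 4))

# Optimize only the parameters that still require grad
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)

loss = model(torch.tensor([1, 2, 3])).sum()
loss.backward()   # works: the projection provides the gradient path
optimizer.step()
assert model.emb.weight.grad is None  # the frozen table received no gradient
```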