There is an issue with freezing parameters #71
When I inspected the model's total parameter count against its trainable parameter count, I found that the number of trainable parameters in the first training stage was abnormal: there were extra trainable parameters tied to the tokenizer.

The specific cause is that anyToImageVideoAudio.py freezes the LLaMA parameters before initializing the tokenizer, which leads to this problem.
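For context, a mismatch like this can be spotted with a generic parameter-count check; the helper below is a minimal sketch (not code from this repo) that works on any PyTorch module:

```python
def count_parameters(model):
    """Print total vs. trainable parameter counts for a PyTorch module."""
    total = sum(p.numel() for p in model.parameters())
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    print(f"total: {total:,} | trainable: {trainable:,} "
          f"({100.0 * trainable / total:.2f}%)")
```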
Maintainer reply: This is because we resize the LLM's embedding layer after adding the new tokens, so the parameters of the embedding layer become trainable.
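To illustrate why the order matters: in Hugging Face transformers, `resize_token_embeddings` allocates a fresh `nn.Embedding` (and, for untied models, a fresh output head), and newly created parameters default to `requires_grad=True`, so a freeze applied earlier no longer covers them. A minimal sketch, assuming a Hugging Face LLaMA checkpoint (the path and token name are placeholders):

```python
from transformers import LlamaForCausalLM, LlamaTokenizer

model = LlamaForCausalLM.from_pretrained("path/to/llama")  # placeholder path
tokenizer = LlamaTokenizer.from_pretrained("path/to/llama")

# Freeze the whole LLM first (the ordering described in the issue).
for p in model.parameters():
    p.requires_grad = False

# Adding tokens and resizing afterwards re-creates the embedding layer;
# the new nn.Embedding starts out with requires_grad=True, so it is
# trainable even though the rest of the model stays frozen.
tokenizer.add_tokens(["<new_token>"])  # illustrative token
model.resize_token_embeddings(len(tokenizer))

# If fully frozen embeddings are actually desired, re-freeze after the
# resize (or resize before freezing in the first place):
model.get_input_embeddings().weight.requires_grad = False
out = model.get_output_embeddings()
if out is not None:
    out.weight.requires_grad = False
```

Whether this counts as a bug depends on intent: if the resized embedding rows for the new tokens are meant to be trained in the first stage, the extra trainable parameters are expected.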