Slow loss convergence #29
I'm sorry to hear that. Can you try repeating your experiments with the original implementation I started from and see if there's any difference? https://github.com/jaehyunnn/ViTPose_pytorch
I'm sorry, it seems to be a problem with the original project. I will remove the fine-tuning part entirely from the current state of the repository if it is broken. I would suggest using the original ViTPose implementation, or checking whether any obvious bug is present in this one. Good luck
Hello,
I'm attempting to perform fine-tuning with your implementation (I'm using commit e8e2ad1 from April 24, as I don't need foot keypoints).
Unfortunately, I think the loss may not be converging properly. I also tried running the training from scratch (without fine-tuning): in the first 5 epochs the loss decreased from 0.0168 to 0.0063, but it remained stuck at 0.0063 for the next 25 epochs.
Do you have any suggestions for how to solve it?
I've used the same hyperparameters as in your code, but changed the layer decay rate from 0.75 to 1-1e-4.
Thank you for your time and assistance!
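For context, here is a minimal sketch of how layer-wise learning-rate decay is typically applied in ViT-style training (this is an assumption about the general scheme, not the repository's exact code; the function name and layer count are hypothetical). It illustrates why setting the decay rate to 1-1e-4 effectively disables the decay:

```python
def layer_lr_scales(num_layers: int, decay_rate: float) -> list[float]:
    """Per-layer LR multipliers: layer i (0 = embedding, num_layers = head)
    is scaled by decay_rate ** (num_layers - i), so earlier layers train
    more slowly than the head when decay_rate < 1."""
    return [decay_rate ** (num_layers - i) for i in range(num_layers + 1)]

# With decay_rate = 0.75 and 12 transformer blocks, the earliest layers
# train roughly 30x slower than the head; with decay_rate = 1 - 1e-4,
# every layer trains at essentially the full learning rate.
print(layer_lr_scales(12, 0.75)[0])      # ~0.0317
print(layer_lr_scales(12, 1 - 1e-4)[0])  # ~0.9988
```

In other words, with a decay rate this close to 1, the backbone is no longer protected by smaller learning rates, which can change convergence behavior relative to the default 0.75.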