How to accelerate fine-tuning a model with a TPU using Happy Transformer? #304
Comments
Try saving the model and then loading it into whatever platform you want to use for acceleration.
I'm talking about fine-tuning for a downstream task or another dataset.
Thanks for the clarification. I thought you were referring to using something like Hugging Face's Optimum library. Happy Transformer currently doesn't support using a TPU for training.
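A minimal sketch of the save-then-reload idea suggested above: fine-tune and save with Happy Transformer, then reload the saved files with the plain Hugging Face `transformers` API, which can then be moved to whatever accelerated runtime you prefer. The model names, paths, and the `HappyGeneration` usage here are illustrative assumptions based on Happy Transformer's documented interface, not a verified recipe:

```python
# Sketch: export a Happy Transformer model so it can be reloaded
# elsewhere (e.g. an accelerated runtime) via the standard
# transformers API. Model names and paths are illustrative.

def export_model(save_dir: str = "./fine_tuned_gpt2") -> str:
    """Fine-tune (training call commented out) and save the model files."""
    from happytransformer import HappyGeneration

    happy_gen = HappyGeneration("GPT2", "gpt2")
    # Fine-tune on your own data first, e.g.:
    # happy_gen.train("train.txt")
    happy_gen.save(save_dir)  # writes config, weights, and tokenizer files
    return save_dir


def load_for_acceleration(save_dir: str):
    """Reload with plain transformers so any backend can consume it."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model = AutoModelForCausalLM.from_pretrained(save_dir)
    tokenizer = AutoTokenizer.from_pretrained(save_dir)
    return model, tokenizer
```

Calling `export_model()` and then `load_for_acceleration(path)` would give you a standard `transformers` model object, which you could then place on a TPU with a library such as `torch_xla` or continue training with the `transformers` `Trainer`.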
Hi, I'm trying to accelerate fine-tuning a model for a specific task using Happy Transformer.
Can someone show me how to do this?