[Bug Report] All fine-tunes for Mistral 7B using SageMaker JumpStart are currently failing. #4553
Comments
I was able to work around this. The bug is still in SageMaker, but I fixed it by downloading and unpacking the AWS source that is used and manually repairing the dependencies by building new Linux wheels that don't break. Using the latest manylinux2014 wheels for transformers, tokenizers, and huggingface_hub fixes it.
I am also facing the same issue. Can you please elaborate on how you fixed it? I am fairly new to this.
I solved the issue by updating transformers to the latest version, 4.37.2, which also updated tokenizers and huggingface_hub.
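The fix above amounts to enforcing a minimum transformers version. A minimal sketch of such a check, assuming the 4.37.2 floor reported in this thread (the helper names are illustrative, not part of any AWS or Hugging Face API):

```python
# Hypothetical pre-flight check: compare an installed package version
# against the minimum version the workaround in this thread requires.

def parse_version(v: str) -> tuple:
    """Parse a dotted version string like '4.37.2' into a comparable tuple."""
    return tuple(int(part) for part in v.split(".")[:3])

# The thread reports that upgrading transformers to 4.37.2 resolved the break.
MIN_TRANSFORMERS = (4, 37, 2)

def meets_minimum(installed: str, minimum: tuple = MIN_TRANSFORMERS) -> bool:
    """True if the installed version is at or above the required minimum."""
    return parse_version(installed) >= minimum
```

In practice you would feed this the output of `importlib.metadata.version("transformers")` inside the training container before launching the fine-tune, and fail fast with a clear message instead of hitting the ImportError mid-job.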
All fine-tunes for Mistral 7B using SageMaker JumpStart are currently failing with:

```
ImportError: cannot import name 'insecure_hashlib' from 'huggingface_hub.utils' (/opt/conda/lib/python3.10/site-packages/huggingface_hub/utils/__init__.py)
```

After changing nothing on my end, fine-tuning started to fail following many previously successful fine-tunes.
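An error like this is easy to confirm before launching a job: just check whether the module actually exposes the name the training code will import. A small sketch of such a probe (the helper is hypothetical, using only the standard library):

```python
import importlib

def require_attr(module_name: str, attr: str) -> bool:
    """Return True if the module can be imported and exposes the attribute.

    Hypothetical helper for diagnosing mismatched-dependency errors like the
    one reported here, where a newer package imports a name that the installed
    version of another package does not yet provide.
    """
    try:
        mod = importlib.import_module(module_name)
    except ImportError:
        return False
    return hasattr(mod, attr)

# In the broken container, require_attr("huggingface_hub.utils",
# "insecure_hashlib") would return False, mirroring the ImportError above;
# after upgrading huggingface_hub it should return True.
```

Running this at container startup turns a mid-training crash into an immediate, actionable failure.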
To reproduce
Try to fine-tune any model using the guide here:
https://aws.amazon.com/blogs/machine-learning/fine-tune-and-deploy-mistral-7b-with-amazon-sagemaker-jumpstart/