LoRA fine-tuning with llama3 throws an error: NotImplementedError: Cannot copy out of meta tensor; no data! #115
Comments
Hmm, the tutorial hasn't been tested in a Colab environment. You could debug it to find where the bug is, or use the same AutoDL environment as the tutorial.
Changed this line:

```python
model = AutoModelForCausalLM.from_pretrained(model_dir, device_map='auto', torch_dtype=torch.bfloat16)
```

to:

```python
model = AutoModelForCausalLM.from_pretrained(model_dir, device_map='cuda', torch_dtype=torch.half, trust_remote_code=True)
```

Re-ran it, and it still reports an error:
torch.half loads the model in half precision. Some GPUs don't support bf16, but all GPUs should support fp16, so this shouldn't be what's causing the OOM.
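The dtype choice the comment describes can be sketched as a small helper. This is a hypothetical illustration, not code from the tutorial; in practice `torch.cuda.is_bf16_supported()` provides the capability check:

```python
def pick_dtype(bf16_supported: bool) -> str:
    # Prefer bf16 where the GPU supports it (roughly Ampere and newer);
    # every CUDA GPU supports fp16, so fall back to that otherwise.
    return "bfloat16" if bf16_supported else "float16"

# With transformers, wiring this in might look like (assumption, not from the source):
#   dtype = torch.bfloat16 if torch.cuda.is_bf16_supported() else torch.half
#   model = AutoModelForCausalLM.from_pretrained(model_dir, device_map='cuda', torch_dtype=dtype)
```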
In the Colab environment, running llama3 LoRA fine-tuning throws: NotImplementedError: Cannot copy out of meta tensor; no data!. The fine-tuning code is as follows: