
ChatGLM2-6B LoRA fine-tuning: with model_parallel_mode set to True, saving a checkpoint and then loading it raises an error #144

Open
wxz2002 opened this issue Jul 26, 2023 · 4 comments

@wxz2002 commented Jul 26, 2023

I'm certain the checkpoint path is correct.
(screenshot: error traceback)

@yuanzhoulvpi2017 (Owner)

Have you seen this issue? #140

@wxz2002 (Author) commented Jul 26, 2023

> Have you seen this issue? #140

Thanks! But that issue seems to be about running a second round of fine-tuning on top of already-trained LoRA weights. What I want is to resume training via resume_from_checkpoint, so that the optimizer, learning-rate scheduler, and related state are restored to what they were when the checkpoint was saved. I don't understand why this error is raised.
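For context on what a full resume needs: when the Hugging Face Trainer saves a checkpoint, it writes optimizer and scheduler state next to the model weights, and resume_from_checkpoint restores training from those files. A minimal sketch below (a hypothetical helper, not part of this repo) checks a checkpoint directory for the usual state files before attempting a resume; the file names assume the standard Trainer checkpoint layout.

```python
import os
import tempfile

# Files the Hugging Face Trainer normally writes into a checkpoint directory;
# optimizer.pt and scheduler.pt carry the state a full resume restores.
RESUME_STATE_FILES = ["optimizer.pt", "scheduler.pt", "trainer_state.json"]

def missing_resume_files(checkpoint_dir: str) -> list:
    """Return the resume-state files that are absent from checkpoint_dir."""
    return [f for f in RESUME_STATE_FILES
            if not os.path.isfile(os.path.join(checkpoint_dir, f))]

# Example: an empty directory is missing all three state files.
with tempfile.TemporaryDirectory() as d:
    print(missing_resume_files(d))
```

If any of these files are missing (e.g. only the LoRA adapter weights were saved), resume_from_checkpoint cannot restore the optimizer or scheduler, which is consistent with the limitation described in this thread.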

@yuanzhoulvpi2017 (Owner)

Hi, I'm really sorry, but this feature hasn't been implemented yet 😂

@wxz2002 (Author) commented Jul 26, 2023

OK, thank you very much.
