
How to load LoRA fine-tuned model weights in finetune.py for a second round of fine-tuning #279

Open
dle666 opened this issue Apr 19, 2024 · 4 comments
dle666 commented Apr 19, 2024

The existing model-loading code reports an error that the config file cannot be found. Can I simply use AutoModel instead, as shown in the screenshot below?
[screenshot]

@yuhangzang
Collaborator

"The existing model-loading code reports an error that the config file cannot be found" -> Can you provide the corresponding error log? Thanks.

@dle666
Author

dle666 commented Apr 22, 2024

"The existing model-loading code reports an error that the config file cannot be found" -> Can you provide the corresponding error log? Thanks.

[screenshot of the error log]

@yuhangzang
Collaborator

Thanks for your feedback! Do you use the AutoPeftModelForCausalLM class here to load the model?
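A minimal sketch of what that loading could look like. The function name, adapter path, and flags are illustrative assumptions; it assumes the checkpoint was saved with PEFT's save_pretrained(), so the directory holds adapter_config.json rather than a full config.json, which would explain the missing-config error from a plain from_pretrained call.

```python
def load_lora_checkpoint_for_resume(adapter_dir):
    """Load a LoRA checkpoint so a second round of fine-tuning can continue.

    adapter_dir is a hypothetical path to a directory produced by PEFT's
    save_pretrained(); it contains adapter_config.json plus the adapter
    weights, not a full model config.json (hence the error when loading
    it with a plain AutoModel.from_pretrained).
    """
    from peft import AutoPeftModelForCausalLM

    model = AutoPeftModelForCausalLM.from_pretrained(
        adapter_dir,
        trust_remote_code=True,  # InternLM-XComposer2 ships custom modeling code
        is_trainable=True,       # keep the LoRA weights trainable for round two
    )
    return model
```

AutoPeftModelForCausalLM reads adapter_config.json, loads the referenced base model, then attaches the adapter weights on top, so the same directory works for both inference and continued training.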

@Starfulllll

Thanks for your feedback! Do you use the AutoPeftModelForCausalLM class here to load the model?

Hello, and thank you for your work! I'd like to ask: after loading the model with AutoPeftModelForCausalLM and continuing training following the LoRA setup code in finetune.py, I get the error below. How can I fix it? I have confirmed that I set model.tokenizer, but it does not seem to take effect.
to_regress_embeds, attention_mask, targets, im_mask = self.interleav_wrap(
File "/root/.cache/huggingface/modules/transformers_modules/xcomposer2-4khd/modeling_internlm_xcomposer2.py", line 226, in interleav_wrap
part_tokens = self.tokenizer(
TypeError: 'NoneType' object is not callable
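One plausible cause (a guess, not confirmed in this thread): PEFT-style wrappers forward attribute *reads* to the wrapped model but not attribute *writes*, so `model.tokenizer = tok` lands on the wrapper while `interleav_wrap`, which runs on the inner model, still sees `tokenizer is None`. A self-contained reproduction with dummy stand-in classes (no real model is loaded; all names here are hypothetical):

```python
class InnerModel:
    """Stand-in for the real modeling class: interleav_wrap reads self.tokenizer."""
    def __init__(self):
        self.tokenizer = None  # unset until someone attaches a tokenizer

    def interleav_wrap(self, text):
        return self.tokenizer(text)  # TypeError if tokenizer is still None


class PeftLikeWrapper:
    """Mimics a PEFT wrapper: attribute reads fall through, writes do not."""
    def __init__(self, inner):
        self.inner = inner

    def __getattr__(self, name):
        # Only called when normal lookup fails -> forwards reads to inner
        return getattr(self.inner, name)


wrapper = PeftLikeWrapper(InnerModel())

wrapper.tokenizer = str.split  # write lands on the wrapper only
try:
    wrapper.interleav_wrap("a b")
except TypeError as e:
    print(e)  # the inner model's tokenizer is still None

# Fix in this repro: set the attribute on the wrapped model itself
wrapper.inner.tokenizer = str.split
print(wrapper.interleav_wrap("a b"))  # ['a', 'b']
```

In the real code this would correspond to something like `model.get_base_model().tokenizer = tokenizer` (get_base_model is a real PEFT method), but verify against the actual wrapper in use; the attribute path is an assumption.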


4 participants