
Can a chatglm2 model that has been fully fine-tuned still be fine-tuned again with LoRA? #145

Open
lianglinyi opened this issue Aug 4, 2023 · 6 comments

@lianglinyi

A question: can a chatglm2 model that has already been fully fine-tuned still be fine-tuned again with LoRA?

@yuanzhoulvpi2017
Owner

Yes, it can. The earlier full fine-tune doesn't interfere.

@lianglinyi
Author

My fine-tuning run is now throwing an error, and I'm not sure how to fix it:
Target module QuantizedLinear() is not supported. Currently, only torch.nn.Linear and Conv1D are supported.

Looking at the code, the problem seems to start here, at main.py lines 165 and 166. Any advice?
model = model.to(torch.bfloat16)       # cast weights to bfloat16
model = get_peft_model(model, config)  # wrap the model with LoRA adapters -- this is where it fails

@yuanzhoulvpi2017
Owner

You used ChatGLM2's own built-in quantization, right? I don't recommend using its built-in quantization scheme. As the error message says, peft's get_peft_model can only inject LoRA adapters into torch.nn.Linear and Conv1D modules, and ChatGLM2's quantization replaces those with its custom QuantizedLinear layers.
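
If you want to confirm that's what's happening, here is a small sketch that loads the base model with ChatGLM2's own int8 quantization and lists the custom layers peft cannot wrap (the model path is illustrative; substitute your own checkpoint):

from transformers import AutoModel

# Load with ChatGLM2's own quantize() method -- the path the error suggests you're on
model = AutoModel.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True).quantize(8)

# Every module printed here is a custom QuantizedLinear that get_peft_model rejects
for name, module in model.named_modules():
    if type(module).__name__ == "QuantizedLinear":
        print(name)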

@lianglinyi
Author

Yes, during the full fine-tuning I used ChatGLM2's int8 quantization. So you're saying the fully fine-tuned model shouldn't be quantized that way? My machine isn't great, though; without quantization it can't run the model.

@yuanzhoulvpi2017
Owner

Upgrade transformers to the latest version, then use the quantization method that transformers provides (under the hood it's just the bitsandbytes quantization scheme). Adjust your code accordingly and give it a try.
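
A minimal sketch of what that change might look like, assuming a hypothetical checkpoint path and typical LoRA hyperparameters; "query_key_value" is the usual LoRA target module for ChatGLM2, but verify it against your model's module names:

from transformers import AutoModel, BitsAndBytesConfig
from peft import LoraConfig, TaskType, get_peft_model, prepare_model_for_kbit_training

# Let transformers/bitsandbytes handle the int8 quantization instead of calling
# ChatGLM2's own quantize(); bnb's 8-bit Linear layers are supported by peft.
model = AutoModel.from_pretrained(
    "path/to/your/full-finetuned-chatglm2",  # hypothetical path
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    trust_remote_code=True,
)

# Recommended peft prep step for training on a k-bit quantized model
model = prepare_model_for_kbit_training(model)

config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                   # assumed LoRA rank
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["query_key_value"],    # assumed target for ChatGLM2
)
model = get_peft_model(model, config)
model.print_trainable_parameters()

Note that with the model loaded in 8-bit you would also drop the model.to(torch.bfloat16) cast from main.py line 165; transformers does not allow dtype casts on a quantized model.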

@lianglinyi
Author

Thanks! I'll give it a try.
