Question about the save method of llm_instruction_reranker #749
Comments
This method is mainly designed for LoRA.
Got it, thanks!
First of all, many thanks to your team for this outstanding open-source work; it has been extremely helpful to us.

I ran into a problem while using llm_instruction_reranker and hope you can clarify it.

The problem is in the method below, which comes from: https://github.com/FlagOpen/FlagEmbedding/blob/13da7435aba2c4cfbbd7caa4c595fe4862f6ba19/FlagEmbedding/llm_reranker/finetune_for_instruction/trainer.py#L9C2-L29C1

In the LoRA case, the modified save method in modeling is called, and the resulting model is as expected. But with full-parameter fine-tuning, the default save method is used, and the model's keys become `model.xxx` (they should be `xxx`), so the checkpoint can no longer be loaded with AutoModelForCausalLM. Is there a reason for this? Shouldn't the save method from modeling be used in both cases? Looking forward to your clarification.
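To make the symptom concrete, here is a minimal sketch of a workaround under the assumption described above: the full-fine-tuning checkpoint's state-dict keys carry an extra `model.` prefix from the wrapper module, while `AutoModelForCausalLM.from_pretrained` expects the unprefixed keys. The helper `strip_model_prefix` is hypothetical, not part of FlagEmbedding; the checkpoint keys shown are placeholders.

```python
# Hypothetical post-processing helper for the key mismatch described above:
# the default Trainer save path writes keys as "model.xxx" (from the
# wrapper module), while AutoModelForCausalLM expects plain "xxx".
def strip_model_prefix(state_dict, prefix="model."):
    """Return a copy of state_dict with the leading wrapper prefix removed."""
    return {
        (key[len(prefix):] if key.startswith(prefix) else key): value
        for key, value in state_dict.items()
    }

# Placeholder checkpoint entries (values stand in for tensors):
ckpt = {"model.embed_tokens.weight": 1, "model.lm_head.weight": 2}
fixed = strip_model_prefix(ckpt)
print(sorted(fixed))  # keys now match what from_pretrained expects
```

In practice one would load the saved checkpoint (e.g. with `torch.load`), apply such a renaming, and re-save it before calling `from_pretrained`; the cleaner fix the issue asks about is for the trainer to route full-parameter saves through the same custom save method used for LoRA.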