
Looking for help using llama.cpp with Phi3 model and LoRA #7164

Open
SHIMURA0 opened this issue May 9, 2024 · 6 comments

Comments

@SHIMURA0

SHIMURA0 commented May 9, 2024

Recently, I used QLoRA to fine-tune the Phi3-mini-4k-instruct model and saved the LoRA parameters. I plan to merge the LoRA layers onto the original model in Ollama. I started with llama.cpp as usual; in particular, I used the Python script "convert-lora-to-ggml.py" to convert the LoRA parameters so that they can be used in Ollama, but I ran into the following error:

INFO:lora-to-gguf:model.layers.0.mlp.down_proj => blk.0.ffn_down.weight.loraA (8192, 32) float32 1.00MB
INFO:lora-to-gguf:model.layers.0.mlp.down_proj => blk.0.ffn_down.weight.loraB (3072, 32) float32 0.38MB
INFO:lora-to-gguf:model.layers.0.mlp.gate_up_proj => blk.0.ffn_up.weight.loraA (3072, 32) float32 0.38MB
INFO:lora-to-gguf:model.layers.0.mlp.gate_up_proj => blk.0.ffn_up.weight.loraB (16384, 32) float32 2.00MB
ERROR:lora-to-gguf:Error: could not map tensor name base_model.model.model.layers.0.self_attn.qkv_proj.lora_A.weight
ERROR:lora-to-gguf: Note: the arch parameter must be specified if the model is not llama

(By the way, I applied LoRA to the "qkv_proj", "gate_up_proj", and "down_proj" layers of the Phi3 model.)

I would be grateful if someone could give me some suggestions on solving this issue. Thanks in advance!
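For reference, here is a minimal sketch of the fine-tuning setup I described (only target_modules and the rank of 32 reflect my setup; the alpha/dropout values and paths are placeholders):

```python
# Sketch of the QLoRA/PEFT setup targeting the fused Phi3 projection layers.
# target_modules and r=32 match the setup above; alpha/dropout are placeholders.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("microsoft/Phi-3-mini-4k-instruct")
config = LoraConfig(
    r=32,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["qkv_proj", "gate_up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
# ... fine-tune, then save only the adapter weights:
model.save_pretrained("phi3-lora-adapter")
```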

@SHIMURA0
Author

I found that the layers of Phi2 and Phi3 are named differently: for Phi2, llama.cpp converts LoRA weights to GGML without problems (there the attention layer is named Wqkv), while in Phi3 this layer is named qkv_proj. Could this naming difference be the reason llama.cpp fails to convert to GGML?
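A quick way to check is to list the tensor names stored in the saved adapter (a sketch; the path is just an example and assumes the adapter was saved as safetensors):

```python
# Print the LoRA tensor names to see whether the attention layer is called
# qkv_proj (Phi3) or Wqkv (Phi2).
from safetensors import safe_open

with safe_open("phi3-lora-adapter/adapter_model.safetensors", framework="pt") as f:
    for name in f.keys():
        print(name)
# For Phi3 the names look like
#   base_model.model.model.layers.0.self_attn.qkv_proj.lora_A.weight
# which is exactly the name the converter fails to map above.
```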

@teaglin

teaglin commented May 13, 2024

Any update on this? I am running into the same issue. The LoRA runs correctly with transformers, but when I convert it for llama.cpp, it gives me nonsense output.

@SHIMURA0
Author

Hope this will be fixed as soon as possible.

@SHIMURA0
Author

The reason is that llama.cpp treats Phi3 as a llama architecture, i.e., it expects the merged qkv_proj to be split into separate q_proj, k_proj, and v_proj layers. One workaround, posted by @Raibows at vllm-project/vllm#4715, is to convert the tensor weights of your adapter/LoRA checkpoint to match that layout; he provides a script at https://gist.github.com/Raibows/079713a060f0c49c8f3b47c227aff722.
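The core idea of that conversion, as I understand it, is roughly the following (a simplified sketch, not the gist itself; it only handles qkv_proj and assumes Phi-3-mini-4k-instruct, where q, k, and v each have width 3072):

```python
# Split each fused qkv_proj LoRA pair into separate q_proj/k_proj/v_proj pairs
# so the tensor names match what llama.cpp expects for the llama architecture.
from safetensors.torch import load_file, save_file

def split_qkv(adapter_path: str, out_path: str, width: int = 3072) -> None:
    state_dict = load_file(adapter_path)
    new_sd = {}
    for name, tensor in state_dict.items():
        if "qkv_proj" not in name:
            new_sd[name] = tensor
            continue
        for i, proj in enumerate(("q_proj", "k_proj", "v_proj")):
            new_name = name.replace("qkv_proj", proj)
            if "lora_A" in name:
                # lora_A acts on the layer input and is shared by q, k and v
                new_sd[new_name] = tensor.clone()
            else:
                # lora_B rows are stacked as [q; k; v], so slice along dim 0
                new_sd[new_name] = tensor[i * width:(i + 1) * width, :].clone()
    save_file(new_sd, out_path)

split_qkv("adapter_model.safetensors", "adapter_model_split.safetensors")
```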

I have tested it, and it successfully converts the LoRA weights into GGML, but there is another problem: Ollama cannot integrate these GGML LoRA weights back into Phi3-instruct. I think we need to somehow merge the LoRA weights back into the base model...
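One way to do that merge (a sketch of what I mean; paths are examples and this assumes the adapter was saved with PEFT) is to merge the adapter into the base model first and then convert the merged model as a whole, instead of converting the LoRA separately:

```python
# Merge the LoRA adapter into the base Phi3 model with PEFT, save the merged
# model, and then convert that full model with llama.cpp instead of converting
# the adapter on its own.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("microsoft/Phi-3-mini-4k-instruct")
merged = PeftModel.from_pretrained(base, "phi3-lora-adapter").merge_and_unload()

merged.save_pretrained("phi3-merged")
AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct").save_pretrained("phi3-merged")
# then, e.g.: python convert-hf-to-gguf.py phi3-merged --outfile phi3-merged.gguf
```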

@SHIMURA0
Author

anyone?

@dimitristaufer

I have the same issue...
