
[Question] Why do I get nothing when I test my LoRA finetuned model? #1493

Open
wuwu-C opened this issue May 8, 2024 · 1 comment

Comments


wuwu-C commented May 8, 2024

Question

1. I use finetune_lora to finetune the model for 3 epochs, and I modified the save code so that every epoch saves non_lora_trainable.bin.
2. I merge the LoRA weights for every epoch.
3. I tested with model_vqa, but my output tensor is null.
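For reference, the merge in step 2 amounts to folding the low-rank update back into the base weight, W' = W + (alpha / r) * B @ A. A minimal pure-Python sketch of that arithmetic (a hypothetical helper for illustration, not the repo's merge script; `alpha` and `r` are the standard LoRA hyperparameters):

```python
def merge_lora(W, A, B, alpha, r):
    """Fold a LoRA update into a base weight matrix.

    W: d x k base weight, A: r x k down-projection, B: d x r up-projection,
    all as nested lists. Returns W + (alpha / r) * B @ A.
    """
    scale = alpha / r
    d, k = len(W), len(W[0])
    merged = [row[:] for row in W]  # copy so the base weight is untouched
    for i in range(d):
        for j in range(k):
            # (B @ A)[i][j] computed directly
            delta = sum(B[i][t] * A[t][j] for t in range(r))
            merged[i][j] += scale * delta
    return merged


# Tiny example: identity base weight, rank-1 update
print(merge_lora([[1, 0], [0, 1]], [[1, 1]], [[1], [2]], alpha=1, r=1))
```

If the merged model produces empty output, one common cause is that the merge only covered the LoRA adapters and the separately saved non-LoRA weights (e.g. the projector) were never loaded back in.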

@Vignesh-Valaboju

Are your projector weights changing after every epoch?
