Issues: DAMO-NLP-SG/Video-LLaMA
Issues list
finetune-billa7b-zh inference error: shape '[-1, 136]' is invalid for input of size 137
#161
opened May 16, 2024 by
len2618187
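The error in #161 comes from PyTorch's `view(-1, n)`, which can only infer the leading dimension when the total element count is divisible by `n` (137 is not divisible by 136). A minimal sketch of that constraint, using a hypothetical helper rather than Video-LLaMA code:

```python
def rows_for_view(numel: int, row_len: int) -> int:
    """Return the row count that view(-1, row_len) would infer, or raise.

    Mirrors PyTorch's rule: the flattened element count must be an exact
    multiple of the requested row length.
    """
    if numel % row_len != 0:
        raise ValueError(
            f"shape '[-1, {row_len}]' is invalid for input of size {numel}"
        )
    return numel // row_len
```

An off-by-one like 137 vs. 136 often hints that the sequence or embedding length differs by one token between the checkpoint's configuration and the runtime tokenizer (for example, an extra special token).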
inf value occurs during the forward pass when fine-tuning the VL branch with LLaVA-150K + MiniGPT4-3.5K + webvid-instruct
#138
opened Jan 4, 2024 by
xuboshen
Dear author, how much time does it take to train this model, and on what type of GPU cards?
#136
opened Dec 27, 2023 by
zhangyuereal
RuntimeError: Error(s) in loading state_dict for LlamaForCausalLM:
  size mismatch for model.embed_tokens.weight: copying a param with shape torch.Size([32001, 4096]) from checkpoint, the shape in current model is torch.Size([32000, 4096]).
  size mismatch for lm_head.weight: copying a param with shape torch.Size([32001, 4096]) from checkpoint, the shape in current model is torch.Size([32000, 4096]).
#135
opened Dec 23, 2023 by
Amber0913
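The one-row mismatch in #135 (32001 vs. 32000) typically means an extra special token (such as a pad token) was added to the tokenizer before fine-tuning, growing the vocabulary by one. A hedged sketch of a pre-load shape check, with hypothetical helper and variable names (not repository code):

```python
def find_size_mismatches(ckpt_shapes, model_shapes):
    """Report parameters whose checkpoint shape differs from the model's."""
    return {
        name: (ckpt_shapes[name], model_shapes[name])
        for name in ckpt_shapes
        if name in model_shapes and ckpt_shapes[name] != model_shapes[name]
    }

# Shapes taken from the error message in issue #135:
ckpt = {
    "model.embed_tokens.weight": (32001, 4096),
    "lm_head.weight": (32001, 4096),
}
model = {
    "model.embed_tokens.weight": (32000, 4096),
    "lm_head.weight": (32000, 4096),
}
```

With Hugging Face transformers, the usual remedy is to add the same special token to the tokenizer and call `model.resize_token_embeddings(len(tokenizer))` before loading the checkpoint, so that `embed_tokens` and `lm_head` grow to match the saved shapes.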