Feasibility of fine-tuning CogVLM on two RTX 3090s? #479
Comments
Question: Has anyone tried fine-tuning CogVLM with two RTX 3090s? Is it feasible?

Answer: Not in BF16. The model already takes about 36 GB just for inference, and during fine-tuning the image padding must also be added to the tokens, so the remaining 8–10 GB of VRAM is not enough. An INT4 setup with heavily modified code is theoretically feasible, but quite difficult in practice.
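As a rough sanity check on the figures in the answer, a back-of-envelope weight-memory estimate can be sketched. The parameter count below (~17B) is an assumption for illustration and is not stated in the thread; the estimate covers raw weights only, ignoring activations, gradients, and optimizer state, which is why fine-tuning needs far more than inference.

```python
# Back-of-envelope VRAM estimate for holding model weights.
# Assumption: ~17e9 parameters (illustrative; not stated in the thread).

PARAMS = 17e9
GB = 1024 ** 3

def weight_memory_gb(n_params: float, bytes_per_param: float) -> float:
    """Memory needed for the raw weights alone, excluding activations,
    gradients, optimizer state, and KV cache."""
    return n_params * bytes_per_param / GB

bf16 = weight_memory_gb(PARAMS, 2.0)  # BF16: 2 bytes/param, ~31.7 GB
int4 = weight_memory_gb(PARAMS, 0.5)  # INT4: 0.5 bytes/param, ~7.9 GB
```

The BF16 weights alone exceed a single 3090's 24 GB, and on 2x24 GB they leave only the 8–10 GB headroom the answer says is insufficient once image-padding tokens and training state are added, which is why the INT4 route is the only one considered even theoretically feasible.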