[QUESTION] Why is expert parallelism not supported during fp16 training? #810
Comments
The reason why expert parallelism may not be supported during FP16 training could be the limitations of FP16 itself. FP16, the half-precision floating-point format, uses less memory and lets the model train faster. However, not all operations are numerically safe in FP16, which can limit its use in certain scenarios.
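As a rough illustration of these numerical limitations (not tied to Megatron-LM itself, just a PyTorch sketch), the snippet below shows the reduced dynamic range and precision of FP16, which is why FP16 training typically needs loss scaling:

```python
import torch

# FP16 overflows above ~65504 and underflows below ~6e-8.
x = torch.tensor([1e5, 1e-8], dtype=torch.float32)
print(x.to(torch.float16))  # tensor([inf, 0.], dtype=torch.float16)

# Small contributions can also vanish when accumulated in FP16:
# at magnitude 2048 the spacing between representable values is 2,
# so adding 1.0 leaves the accumulator unchanged.
acc = torch.tensor(2048.0, dtype=torch.float16)
print(acc + torch.tensor(1.0, dtype=torch.float16))  # tensor(2048., dtype=torch.float16)
```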
From Megatron-LM/megatron/training/arguments.py, line 508 (commit db3a3f7):
Compared to the ep=1 case, the only difference when ep>1 is the additional all-to-all communication operation it introduces. I'm a bit confused about why this setup does not support fp16 training.
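For context, a minimal sketch of the extra all-to-all step that expert parallelism adds is shown below. This is a hypothetical illustration, not Megatron-LM's actual token dispatcher; the function name and arguments are assumptions. It only shows that the communication primitive itself accepts fp16 tensors, so the restriction referenced in arguments.py appears to be enforced at argument parsing rather than by the collective.

```python
import torch
import torch.distributed as dist

def dispatch_tokens_to_experts(local_tokens: torch.Tensor, ep_group=None) -> torch.Tensor:
    """Hypothetical sketch: with expert parallelism (ep > 1), each rank
    exchanges its routed tokens with the other expert-parallel ranks
    via an all-to-all before the expert MLPs run."""
    output = torch.empty_like(local_tokens)
    # all_to_all_single works for fp16/bf16/fp32 activations alike.
    dist.all_to_all_single(output, local_tokens, group=ep_group)
    return output
```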