Dear Authors,
Thanks for providing this great repo!
You mentioned in your FlexMatch paper that a batch norm controller is introduced in the codebase to prevent performance crashes for some algorithms. Specifically, you noted that Mean Teacher, Pi-Model, and MixMatch might be unstable if BatchNorm is updated for both labeled and unlabeled data in turn. Is this related to multi-GPU training? (If I train on a single GPU, will this instability persist?)
Alternatively, can I simply freeze BatchNorm when forwarding the unlabeled batch?
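To make the question concrete, here is a minimal, framework-free sketch of what I mean by freezing (the `ToyBatchNorm` class and its `track_stats` flag are hypothetical, just illustrating the idea): if labeled and unlabeled batches have different statistics, alternating updates pull the running mean back and forth, and freezing during the unlabeled forward avoids that.

```python
# Toy BatchNorm that only tracks a running mean, to illustrate the idea.
# (Hypothetical class, not the repo's actual batch norm controller.)
class ToyBatchNorm:
    def __init__(self, momentum=0.1):
        self.running_mean = 0.0
        self.momentum = momentum
        self.track_stats = True  # set False to "freeze" the running statistics

    def forward(self, batch):
        # Normalize with the batch mean; optionally update the running mean.
        mean = sum(batch) / len(batch)
        if self.track_stats:
            self.running_mean = (
                (1 - self.momentum) * self.running_mean + self.momentum * mean
            )
        return [x - mean for x in batch]

bn = ToyBatchNorm()
labeled = [1.0, 1.0, 1.0]
unlabeled = [9.0, 9.0, 9.0]  # different distribution (e.g. strong augmentation)

bn.forward(labeled)       # running_mean moves toward 1.0 -> becomes 0.1
bn.track_stats = False    # freeze BN for the unlabeled forward pass
bn.forward(unlabeled)     # running_mean is left untouched
bn.track_stats = True

print(round(bn.running_mean, 3))  # 0.1 — only the labeled batch contributed
```

In PyTorch terms this would correspond to temporarily calling `.eval()` on the BatchNorm modules (or setting `track_running_stats` appropriately) for the unlabeled forward, then restoring `.train()`.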
Hope to hear from you. Thanks!