This repository has been archived by the owner on Dec 31, 2023. It is now read-only.
How to configure the parameters of the main thread in the case of multiple workers? #76
I want to run a PyTorch program with multiple workers, but CUDA requires certain parameters to be configured in the main process, and I cannot find the relevant options in the documentation.

Comments

Comment: I am not entirely sure what you are trying to achieve. Maybe …

Reply: There seems to be no way; this apparently has to be configured every time a worker is created, and at present an error is still reported.

Error message discussion: https://discuss.pytorch.org/t/not-using-multiprocessing-but-getting-cuda-error-re-forked-subprocess/54610
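The linked discuss.pytorch.org thread concerns the "Cannot re-initialize CUDA in forked subprocess" error: CUDA state does not survive a fork, so a common remedy (an assumption about this setup, not confirmed in the thread) is to select the "spawn" start method in the main process before any worker is created. A minimal sketch using the standard library, whose `multiprocessing` API `torch.multiprocessing` mirrors; `run` and `worker` are illustrative names, not anything from this issue:

```python
import multiprocessing as mp

def worker(rank, queue):
    # With the "spawn" start method each worker is a fresh interpreter,
    # so CUDA (or any other process-local state) is initialized from
    # scratch here instead of being inherited from a fork.
    queue.put(rank)

def run(num_workers=2):
    # The start method must be chosen in the main process, before any
    # worker is created (and, with CUDA, before CUDA is first touched).
    ctx = mp.get_context("spawn")
    queue = ctx.Queue()
    procs = [ctx.Process(target=worker, args=(i, queue)) for i in range(num_workers)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return sorted(queue.get() for _ in range(num_workers))

if __name__ == "__main__":
    print(run())  # → [0, 1]
```

If the workers come from a `torch.utils.data.DataLoader`, recent PyTorch versions also accept a `multiprocessing_context="spawn"` argument, which applies the same idea without changing the global start method.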