
Optimised graphics card recognition and half-precision recognition #1049

Merged May 21, 2024

Conversation

@bfloat16 (Contributor) commented May 4, 2024

PyTorch itself detects obsolete graphics drivers, so there is no need to restrict older cards.
Requiring graphics memory of at least 5.5 GB (i.e. a nominal 6 GB or more; a 6 GB card actually reports around 5.98 GB available, so 5.5 GB leaves a safety margin) already excludes most old cards.
The config simply checks whether the CUDA compute capability is below 7.0, which identifies cards that don't support fp16, so no whitelist is needed.
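The two checks described above can be sketched as follows. This is a minimal illustration, not the PR's actual code; the function name `detect_gpus` and the thresholds passed as defaults are assumptions for the example, while `torch.cuda.get_device_properties` and `torch.cuda.get_device_capability` are real PyTorch APIs.

```python
import torch

def detect_gpus(min_mem_gb: float = 5.5, min_fp16_capability: float = 7.0):
    """Return (usable CUDA device indices, whether fp16 is safe on all of them).

    Illustrative sketch of the selection logic described in this PR:
    - memory threshold of 5.5 GB excludes most old cards (a nominal 6 GB
      card reports ~5.98 GB, so 5.5 GB leaves a safety margin);
    - compute capability below 7.0 marks the card as unfit for fp16.
    """
    usable, fp16_ok = [], True
    if torch.cuda.is_available():
        for i in range(torch.cuda.device_count()):
            props = torch.cuda.get_device_properties(i)
            mem_gb = props.total_memory / (1024 ** 3)
            if mem_gb < min_mem_gb:
                continue  # too little VRAM: skip this card entirely
            major, minor = torch.cuda.get_device_capability(i)
            if major + minor / 10 < min_fp16_capability:
                fp16_ok = False  # keep the card, but disable half precision
            usable.append(i)
    return usable, fp16_ok
```

On a machine without CUDA this returns an empty list, leaving the caller to fall back to CPU (or, per the discussion below, to avoid MPS for training on macOS).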

@SapphireLab (Contributor)

The only problem is, as stated in issue #808, that it is not recommended to use MPS for training on macOS.

@bfloat16 (Contributor, Author) commented May 5, 2024

> The only problem is, as stated in issue #808, that it is not recommended to use MPS for training on macOS.

Already modified.

3 participants