
Add MPS support #472

Merged 4 commits, Dec 19, 2023
Conversation

@soberhofer (Contributor) commented Dec 16, 2023

Instead of using the boolean parameter use_cuda, we take a parameter of type torch.device. This can be cuda, but it can also be torch.device("mps"), for example, which adds support for GPU acceleration on Apple Silicon.

I ran all the tests with the device switched to mps, and they all passed.

For now I have only edited the code; the documentation and tutorials still have the current content.
This addresses #471
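As a rough illustration of the change described above (a minimal sketch; `CAMBase` is a hypothetical name, not the library's actual class), replacing the `use_cuda` boolean with a `torch.device` parameter lets any backend work the same way:

```python
import torch
import torch.nn as nn

# Hypothetical sketch of the API change: the constructor takes a
# torch.device instead of a boolean use_cuda flag, so cuda, mps, and
# cpu are all handled identically.
class CAMBase:
    def __init__(self, model: nn.Module, device: torch.device = torch.device("cpu")):
        self.device = device
        self.model = model.to(device)

    def __call__(self, input_tensor: torch.Tensor) -> torch.Tensor:
        # Move the input to the model's device before the forward pass.
        return self.model(input_tensor.to(self.device))

# Pick the best available backend: CUDA, Apple Silicon MPS, or CPU.
if torch.cuda.is_available():
    device = torch.device("cuda")
elif torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

cam = CAMBase(nn.Linear(4, 2), device)
out = cam(torch.randn(3, 4))  # shape (3, 2), on the chosen device
```

The caller decides the device once, and the wrapper keeps model and input on the same backend.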

@soberhofer soberhofer changed the title mps support Add MPS support Dec 16, 2023
@jacobgil (Owner):
Thanks a lot for this, this is awesome.

I was thinking about simplifying the device handling completely by using device_of (https://pytorch.org/docs/stable/generated/torch.cuda.device_of.html#torch.cuda.device_of).

We could then just check the device of the model and avoid specifying the device entirely.

What do you think about that?

@soberhofer (Contributor, Author):

That's a great idea; I was not aware of that function.
I can definitely test that. If we go that route, I think it's important to point this out in the documentation: since both the model and the input tensor are provided to grad-cam, we need to choose which device we use (device of the model vs. device of the input tensor).
I don't really have a preference, so I guess the device of the model is fine.
I'll try to implement it that way.

@soberhofer (Contributor, Author) commented Dec 17, 2023

> Thanks a lot for this, this is awesome.
>
> I was thinking about simplifying the device handling completely by using device_of (https://pytorch.org/docs/stable/generated/torch.cuda.device_of.html#torch.cuda.device_of).
>
> We could then just check the device of the model and avoid specifying the device entirely.
>
> What do you think about that?

torch.cuda.device_of() only works for tensors. The recommended way (see: Here) seems to be getting the device of the model's first parameter via next(self.model.parameters()).device. What do you think about that?
Doing it this way, we only need the device parameter in very few files, which is nice. I tested it using cpu and mps.
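The parameter-based approach mentioned above can be sketched as follows (`model_device` is a hypothetical helper name, not necessarily what the PR adds):

```python
import torch
import torch.nn as nn

def model_device(model: nn.Module) -> torch.device:
    # Take the device of the first parameter. This assumes the model has
    # at least one parameter and is not sharded across multiple devices.
    return next(model.parameters()).device

model = nn.Linear(8, 2)      # parameters live on CPU by default
print(model_device(model))   # cpu
```

With this helper, callers never pass a device explicitly; everything follows wherever the model was placed with `.to(...)`.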

@jacobgil (Owner):

Great!
We just need to update pytorch_grad_cam/fullgrad_cam.py before we can merge; the test is failing there.

I'm thinking of merging this and then making a second pass later to modify the documentation.

@soberhofer (Contributor, Author):

> Great! Just need to update pytorch_grad_cam/fullgrad_cam.py before we can merge, the test is failing there.
>
> I'm thinking of merging this and then making a second pass later to modify the documentation.

Fullgrad should be fixed now; I had overlooked it there.

@jacobgil (Owner) commented Dec 19, 2023

Thanks a lot for this contribution @soberhofer.
It both simplifies the code and makes it easy to use any device.

Merging this, and will modify the readme afterwards.

Happy holidays!

@jacobgil jacobgil merged commit 00711a2 into jacobgil:master Dec 19, 2023
1 check passed
@soberhofer (Contributor, Author):

Thanks for merging :) And thanks a lot for maintaining this awesome project 👍
Happy holidays to you too!
