
how to select specific CUDA #66

Open
xzitlou opened this issue Feb 18, 2024 · 1 comment

Comments

xzitlou commented Feb 18, 2024

Hello there, I'm trying to change the CUDA device selection. I saw this:

def run_inference(args, gpu_num, gpu_no, **kwargs):
    ...
    model = model.cuda(gpu_no)
    ...

if __name__ == '__main__':
    ...
    rank, gpu_num = 0, 1
    run_inference(args, gpu_num, rank)

I tried changing rank to 3 so I could use my third GPU, and also passed gpu_no=3 as a keyword argument:

Traceback (most recent call last):
  File "scripts/evaluation/inference.py", line 138, in <module>
    run_inference(args, gpu_num, rank, gpu_no=3)
TypeError: run_inference() got multiple values for argument 'gpu_no'
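For reference, the TypeError occurs because rank is already bound to gpu_no positionally, so adding the keyword argument supplies the same parameter twice. A minimal reproduction (with stand-in values for args, gpu_num, and rank):

```python
def run_inference(args, gpu_num, gpu_no, **kwargs):
    return gpu_no

args, gpu_num, rank = None, 1, 0

# rank already fills the gpu_no slot positionally, so adding the
# keyword duplicates it and raises the TypeError from the traceback:
try:
    run_inference(args, gpu_num, rank, gpu_no=3)
except TypeError as e:
    print(e)  # got multiple values for argument 'gpu_no'

# Passing the desired device index positionally avoids the clash:
assert run_inference(args, gpu_num, 3) == 3
```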

RuojiWang commented Mar 13, 2024

I happened to encounter this problem. After reading the code and testing, I found I did not need to modify the original inference code. I just added a line before the run_inference call in the main function: os.environ['CUDA_VISIBLE_DEVICES'] = '7', so that PyTorch runs on the graphics card numbered 7, which is the eighth graphics card.

This way, I can choose which graphics card to use without setting parameters such as rank, gpu_num, and so on.

The following is the sample code for inference.py; I only added the line os.environ['CUDA_VISIBLE_DEVICES'] = '7'.
[WeChat screenshot of the modified inference.py]
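In code, the workaround amounts to the following minimal sketch. Note the environment variable must be set before PyTorch initializes CUDA, so it should come before the first import torch in the process; the torch lines are commented out here since they assume a machine with eight GPUs.

```python
import os

# Must be set BEFORE torch initializes CUDA, i.e. before `import torch`
# (or at least before the first CUDA call in the process).
os.environ['CUDA_VISIBLE_DEVICES'] = '7'

# After this, PyTorch only sees physical GPU 7, renumbered as cuda:0:
# import torch
# model = model.cuda()              # lands on physical GPU 7
# torch.cuda.device_count()         # reports a single visible device
```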
