
CUDA Fails on every launch until I switch to CPU and back #418

Wowmandeez opened this issue Apr 6, 2024 · 3 comments

@Wowmandeez

First, confirm

  • I have read the instructions carefully
  • I have searched the existing issues
  • I have updated the extension to the latest version

What happened?

As mentioned in the title: when I first open A1111, ReActor no longer works on the first try and occasionally throws errors into the console. After switching to CPU and generating one image, I can switch back to CUDA and everything is fine until the next restart.

Surely there must be a fix that someone smarter than me can figure out.

Steps to reproduce the problem

  1. Launch the webui.
  2. Enable ReActor in img2img and generate.
  3. The face doesn't change.
  4. Switch to CPU in the ReActor settings.
  5. Generate.
  6. It works fine.
  7. Switch back to CUDA in the ReActor settings.
  8. It works fine until the next restart.

Sysinfo

sysinfo-2024-04-06-07-45.json

Relevant console log

100%|████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:01<00:00,  1.20s/it]
18:43:11 - ReActor - STATUS - Working: source face index [0], target face index [0]              | 0/1 [00:00<?, ?it/s]
18:43:11 - ReActor - STATUS - Using Loaded Source Face Model: bl_GREAT.safetensors
18:43:11 - ReActor - STATUS - Analyzing Target Image...
2024-04-06 18:43:12.5908883 [E:onnxruntime:Default, provider_bridge_ort.cc:1548 onnxruntime::TryGetProviderInfo_CUDA] D:\a\_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1209 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "E:\Projects\AI_art\Automatic1111\stable-diffusion-webui\venv\pyenv.cfg\lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"

*************** EP Error ***************
EP Error D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:857 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasnt able to be loaded. Please install the correct version of CUDA andcuDNN as mentioned in the GPU requirements page  (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements),  make sure they're in the PATH, and that your GPU is supported.
 when using ['CUDAExecutionProvider']
Falling back to ['CUDAExecutionProvider', 'CPUExecutionProvider'] and retrying.
****************************************
2024-04-06 18:43:12.6933521 [E:onnxruntime:Default, provider_bridge_ort.cc:1548 onnxruntime::TryGetProviderInfo_CUDA] D:\a\_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1209 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "E:\Projects\AI_art\Automatic1111\stable-diffusion-webui\venv\pyenv.cfg\lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"

*** Error running postprocess_image: F:\Stable\stable-diffusion-webui\extensions\sd-webui-reactor\scripts\reactor_faceswap.py
    Traceback (most recent call last):
      File "E:\Projects\AI_art\Automatic1111\stable-diffusion-webui\venv\pyenv.cfg\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 419, in __init__
        self._create_inference_session(providers, provider_options, disabled_optimizers)
      File "E:\Projects\AI_art\Automatic1111\stable-diffusion-webui\venv\pyenv.cfg\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 483, in _create_inference_session
        sess.initialize_session(providers, provider_options, disabled_optimizers)
    RuntimeError: D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:857 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasnt able to be loaded. Please install the correct version of CUDA andcuDNN as mentioned in the GPU requirements page  (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements),  make sure they're in the PATH, and that your GPU is supported.


    The above exception was the direct cause of the following exception:

    Traceback (most recent call last):
      File "F:\Stable\stable-diffusion-webui\modules\scripts.py", line 856, in postprocess_image
        script.postprocess_image(p, pp, *script_args)
      File "F:\Stable\stable-diffusion-webui\extensions\sd-webui-reactor\scripts\reactor_faceswap.py", line 450, in postprocess_image
        result, output, swapped = swap_face(
      File "F:\Stable\stable-diffusion-webui\extensions\sd-webui-reactor\scripts\reactor_swapper.py", line 594, in swap_face
        target_faces = analyze_faces(target_img, det_thresh=detection_options.det_thresh, det_maxnum=detection_options.det_maxnum)
      File "F:\Stable\stable-diffusion-webui\extensions\sd-webui-reactor\scripts\reactor_swapper.py", line 302, in analyze_faces
        face_analyser = copy.deepcopy(getAnalysisModel())
      File "F:\Stable\stable-diffusion-webui\extensions\sd-webui-reactor\scripts\reactor_swapper.py", line 145, in getAnalysisModel
        ANALYSIS_MODEL = insightface.app.FaceAnalysis(
      File "F:\Stable\stable-diffusion-webui\extensions\sd-webui-reactor\scripts\console_log_patch.py", line 48, in patched_faceanalysis_init
        model = model_zoo.get_model(onnx_file, **kwargs)
      File "E:\Projects\AI_art\Automatic1111\stable-diffusion-webui\venv\pyenv.cfg\lib\site-packages\insightface\model_zoo\model_zoo.py", line 96, in get_model
        model = router.get_model(providers=providers, provider_options=provider_options)
      File "F:\Stable\stable-diffusion-webui\extensions\sd-webui-reactor\scripts\console_log_patch.py", line 21, in patched_get_model
        session = PickableInferenceSession(self.onnx_file, **kwargs)
      File "E:\Projects\AI_art\Automatic1111\stable-diffusion-webui\venv\pyenv.cfg\lib\site-packages\insightface\model_zoo\model_zoo.py", line 25, in __init__
        super().__init__(model_path, **kwargs)
      File "E:\Projects\AI_art\Automatic1111\stable-diffusion-webui\venv\pyenv.cfg\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 432, in __init__
        raise fallback_error from e
      File "E:\Projects\AI_art\Automatic1111\stable-diffusion-webui\venv\pyenv.cfg\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 427, in __init__
        self._create_inference_session(self._fallback_providers, None)
      File "E:\Projects\AI_art\Automatic1111\stable-diffusion-webui\venv\pyenv.cfg\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 483, in _create_inference_session
        sess.initialize_session(providers, provider_options, disabled_optimizers)
    RuntimeError: D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:857 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasnt able to be loaded. Please install the correct version of CUDA andcuDNN as mentioned in the GPU requirements page  (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements),  make sure they're in the PATH, and that your GPU is supported.


---
Total progress: 100%|████████████████████████████████████████████████████████████████████| 1/1 [00:07<00:00,  7.27s/it]
Total progress: 100%|████████████████████████████████████████████████████████████████████| 1/1 [00:07<00:00,  7.27s/it]

Additional information

No response

@Wowmandeez Wowmandeez added bug Something isn't working new labels Apr 6, 2024
@Gourieff
Owner

Gourieff commented Apr 6, 2024

Please install the correct version of CUDA andcuDNN as mentioned in the GPU requirements page (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements)

This is the answer: you have CUDA 12.1, but it seems you have ORT-GPU built for CUDA 11.8.
You should uninstall the onnxruntime-gpu you currently have and let ReActor install the right version according to the docs: https://onnxruntime.ai/docs/install/#install-onnx-runtime-gpu-cuda-12x
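The mismatch described above can be sketched in a few lines. This is an illustrative sketch only, not ReActor's actual logic, and the wheel/CUDA mapping below is an assumption for illustration; the authoritative compatibility matrix is in the onnxruntime install docs.

```python
# Illustrative sketch of the mismatch in the log: a CUDA 12.1 toolkit
# paired with an onnxruntime-gpu wheel built against CUDA 11.8.
# The table below is an assumption for illustration only.
WHEEL_CUDA_MAJOR = {
    "pypi-default": "11",        # default onnxruntime-gpu wheels (CUDA 11.8 at the time)
    "onnxruntime-cuda-12": "12",  # wheels from the CUDA 12 package feed
}

def wheel_matches_toolkit(wheel_source: str, cuda_version: str) -> bool:
    """Return True when the wheel's expected CUDA major version matches
    the installed CUDA toolkit's major version."""
    expected = WHEEL_CUDA_MAJOR.get(wheel_source)
    return expected == cuda_version.split(".")[0]

print(wheel_matches_toolkit("pypi-default", "12.1"))         # False: the 11.8 wheel can't load
print(wheel_matches_toolkit("onnxruntime-cuda-12", "12.1"))  # True: CUDA 12 wheel matches
```

When the majors disagree, onnxruntime fails to load `onnxruntime_providers_cuda.dll` (LoadLibrary error 126, a missing dependent DLL), which is exactly what the console log shows.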

@Gourieff Gourieff added ⛔ dependencies conflict and removed bug Something isn't working new labels Apr 6, 2024
@Wowmandeez
Author

Okay, I definitely tried something like this but ended up making things worse for a bit. At the risk of testing your patience, would you be able to dumb it down a bit more for me in terms of the steps to take?

@jamesewoo

pip uninstall onnxruntime-gpu

then

pip install onnxruntime-gpu --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/

If there are still issues, make sure you followed https://onnxruntime.ai/docs/install/#requirements regarding CUDA and cuDNN.
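After the reinstall, a quick sanity check avoids another restart cycle. This is a hedged sketch: `get_available_providers()` is part of the public onnxruntime Python API, and the try/except keeps the check harmless in an environment where onnxruntime isn't installed at all.

```python
# Sanity-check sketch: confirm the reinstalled onnxruntime-gpu actually
# registers the CUDA execution provider before relaunching the webui.
try:
    import onnxruntime as ort
    providers = ort.get_available_providers()
except ImportError:
    providers = []  # onnxruntime missing entirely in this environment

if "CUDAExecutionProvider" in providers:
    status = "CUDA provider available - ReActor should run on GPU"
else:
    status = "CUDA provider missing - re-check CUDA/cuDNN versions and PATH"
print(status)
```

Run it inside the webui's venv (the same interpreter that loads ReActor), since a system-wide Python may see a different onnxruntime install.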
