The extension works fine, but it always uses the CPU instead of the GPU, even though CUDA is selected in the settings. I noticed ReActor throwing the error message "The 'onnxruntime-gpu' distribution was not found and is required by the application, PLEASE, RESTART the Server!" I think these events are connected and this is a bug, because ReActor also reports the correct CUDA version and even prints: ReActor - STATUS - Running v0.7.0-b7 on Device: CUDA
Steps to reproduce the problem
1. Use the lshqqytiger/stable-diffusion-webui-directml fork
2. Use ZLUDA
3. Install the extension
4. Make sure CUDA is selected in the extension settings
Sysinfo
All on the newest versions: Windows 11, Firefox, the lshqqytiger/stable-diffusion-webui-directml fork, AMD Radeon 7900 XTX
Relevant console log
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: 1.8.0-RC
Commit hash: 7071a4a7e42e7d5588f7c3eee44d411953419f8d
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
D:\Programme\a1111\stable-diffusion-webui-directml\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: `pytorch_lightning.utilities.distributed.rank_zero_only` has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from `pytorch_lightning.utilities` instead.
rank_zero_deprecation(
Collecting onnxruntime-gpu
Using cached onnxruntime_gpu-1.17.1-cp310-cp310-win_amd64.whl.metadata (4.4 kB)
Requirement already satisfied: coloredlogs in d:\programme\a1111\stable-diffusion-webui-directml\venv\lib\site-packages (from onnxruntime-gpu) (15.0.1)
Requirement already satisfied: flatbuffers in d:\programme\a1111\stable-diffusion-webui-directml\venv\lib\site-packages (from onnxruntime-gpu) (24.3.7)
Requirement already satisfied: numpy>=1.21.6 in d:\programme\a1111\stable-diffusion-webui-directml\venv\lib\site-packages (from onnxruntime-gpu) (1.26.2)
Requirement already satisfied: packaging in d:\programme\a1111\stable-diffusion-webui-directml\venv\lib\site-packages (from onnxruntime-gpu) (23.2)
Requirement already satisfied: protobuf in d:\programme\a1111\stable-diffusion-webui-directml\venv\lib\site-packages (from onnxruntime-gpu) (3.20.3)
Requirement already satisfied: sympy in d:\programme\a1111\stable-diffusion-webui-directml\venv\lib\site-packages (from onnxruntime-gpu) (1.12)
Requirement already satisfied: humanfriendly>=9.1 in d:\programme\a1111\stable-diffusion-webui-directml\venv\lib\site-packages (from coloredlogs->onnxruntime-gpu) (10.0)
Requirement already satisfied: mpmath>=0.19 in d:\programme\a1111\stable-diffusion-webui-directml\venv\lib\site-packages (from sympy->onnxruntime-gpu) (1.3.0)
Requirement already satisfied: pyreadline3 in d:\programme\a1111\stable-diffusion-webui-directml\venv\lib\site-packages (from humanfriendly>=9.1->coloredlogs->onnxruntime-gpu) (3.4.1)
Using cached onnxruntime_gpu-1.17.1-cp310-cp310-win_amd64.whl (148.6 MB)
Installing collected packages: onnxruntime-gpu
CUDA 11.8
Error: The 'onnxruntime-gpu' distribution was not found and is required by the application
+---------------------------------+
--- PLEASE, RESTART the Server! ---
+---------------------------------+
Launching Web UI with arguments: --use-zluda
ONNX: selected=CUDAExecutionProvider, available=['AzureExecutionProvider', 'CPUExecutionProvider']
ControlNet preprocessor location: D:\Programme\a1111\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\annotator\downloads
2024-04-04 14:02:03,837 - ControlNet - INFO - ControlNet v1.1.443
2024-04-04 14:02:03,912 - ControlNet - INFO - ControlNet v1.1.443
14:02:04 - ReActor - STATUS - Running v0.7.0-b7 on Device: CUDA
Loading weights [84d76a0328] from D:\Programme\a1111\stable-diffusion-webui-directml\models\Stable-diffusion\epicrealism_naturalSinRC1VAE.safetensors
Creating model from config: D:\Programme\a1111\stable-diffusion-webui-directml\configs\v1-inference.yaml
2024-04-04 14:02:04,711 - ControlNet - INFO - ControlNet UI callback registered.
Running on local URL: http://127.0.0.1:7860
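For context (this diagnostic is my addition, not part of the original report): the log's "The 'onnxruntime-gpu' distribution was not found" error is the wording of a package-metadata lookup failure, and the log line "available=['AzureExecutionProvider', 'CPUExecutionProvider']" shows the active onnxruntime build exposes no CUDA provider. A minimal, stdlib-only sketch of such a metadata check:

```python
from importlib import metadata

def active_onnxruntime_packages():
    """Return the onnxruntime distributions whose metadata is visible
    in the current environment, mapped to their versions."""
    found = {}
    for name in ("onnxruntime", "onnxruntime-gpu", "onnxruntime-directml"):
        try:
            found[name] = metadata.version(name)
        except metadata.PackageNotFoundError:
            # Missing metadata here is what triggers ReActor's
            # "distribution was not found" message for onnxruntime-gpu.
            pass
    return found

print(active_onnxruntime_packages())
```

If `onnxruntime-gpu` is absent from the result even after the `pip install` shown in the log, the wheel never became the active install in the venv, which matches the CPU-only provider list above.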
Additional information
No response
It's unlikely that ORT-GPU will work with ZLUDA out of the box.
Gourieff changed the title from "Reactor does work, however it says onnxruntime-gpu is missing and uses cpu instead of cuda" to "ZLUDA | Reactor does work, however it says onnxruntime-gpu is missing and uses cpu instead of cuda" on Apr 5, 2024
The problem is a combination of lllyasviel/stable-diffusion-webui-forge#88 and #412.
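A hypothetical sketch (the function name and fallback logic are my assumptions, not ReActor's actual code) of why the UI can report "Device: CUDA" while inference still runs on the CPU: when a requested execution provider is not offered by the installed onnxruntime build, the runtime quietly degrades to CPU instead of raising a hard error.

```python
# Provider list taken verbatim from the console log above.
AVAILABLE = ["AzureExecutionProvider", "CPUExecutionProvider"]

def resolve_provider(requested, available=AVAILABLE):
    """Pick the provider actually used for inference: the requested one
    if the installed build supports it, otherwise fall back to CPU."""
    if requested in available:
        return requested
    return "CPUExecutionProvider"

# The extension asks for CUDA, but the CPU-only build silently wins.
print(resolve_provider("CUDAExecutionProvider"))
```

This matches the reported symptom: the setting says CUDA, the status line says CUDA, yet every face swap runs on the CPU.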