
ImportError: cannot import name 'GpuBindingManager' from 'onnxruntime.transformers.io_binding_helper' #20561

Closed
nguyenthekhoig7 opened this issue May 4, 2024 · 4 comments
Labels
ep:CUDA issues related to the CUDA execution provider

Comments

@nguyenthekhoig7

nguyenthekhoig7 commented May 4, 2024

Describe the issue

When running

python3 demo_txt2img.py --help

I encountered this error:

ImportError: cannot import name 'GpuBindingManager' from 'onnxruntime.transformers.io_binding_helper' (/home/user/anaconda3/envs/env_tmp_onnx_image/lib/python3.9/site-packages/onnxruntime/transformers/io_binding_helper.py)

I inspected the file python3.9/site-packages/onnxruntime/transformers/io_binding_helper.py and found that the string 'GpuBindingManager' does not appear anywhere in it; a Ctrl+F search returns no results.
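For reference, the Ctrl+F check described above can be automated. This is a minimal sketch (not from the thread): it locates the installed module via importlib and reports whether the symbol appears in its source file.

```python
import importlib.util

def symbol_in_module_source(module_name: str, symbol: str) -> bool:
    """Return True if `symbol` occurs in the installed module's source file."""
    try:
        spec = importlib.util.find_spec(module_name)
    except ModuleNotFoundError:
        return False
    if spec is None or spec.origin is None:
        return False
    with open(spec.origin, encoding="utf-8") as f:
        return symbol in f.read()

# In the broken environment described above, this prints False,
# matching the ImportError in the traceback:
print(symbol_in_module_source(
    "onnxruntime.transformers.io_binding_helper", "GpuBindingManager"))
```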

My system setup (conda environment):

  • Python version: 3.9
  • onnxruntime version: 1.17.3

I suspect the import is resolving to an incorrect file. Any help or suggestions are appreciated.

To reproduce

  1. Create env and clone
conda create -n env_onnx python==3.9
git clone https://github.com/microsoft/onnxruntime.git
  2. Install dependencies
cd onnxruntime/onnxruntime/python/tools/transformers/models/stable_diffusion/
python3 -m pip install -r requirements-cuda12.txt
python3 -m pip install --upgrade polygraphy onnx-graphsurgeon --extra-index-url https://pypi.ngc.nvidia.com
  3. Run the demo file
python3 demo_txt2img.py --help

At this step I received the error ModuleNotFoundError: No module named 'onnxruntime', so I installed it with pip install -U onnxruntime.
Then I re-ran

python3 demo_txt2img.py --help

and encountered the error above.

Urgency

The demo file is not runnable, and I have no idea how to fix this without insight into the library internals.

Platform

Linux

OS Version

Ubuntu 22.04

ONNX Runtime Installation

Pip Install

ONNX Runtime Version or Commit ID

1.17.3

ONNX Runtime API

Python

Architecture

X64

Execution Provider

CUDA

Execution Provider Library Version

CUDA 12.4

@github-actions github-actions bot added the ep:CUDA issues related to the CUDA execution provider label May 4, 2024
@tianleiwu
Contributor

tianleiwu commented May 4, 2024

You can run git checkout rel-1.17.3 after git clone https://github.com/microsoft/onnxruntime.git to get the demo script that matches 1.17.3.

To run the demo from the main branch, please follow the documentation to run it in Docker (this requires building onnxruntime from source):
https://github.com/microsoft/onnxruntime/blob/main/onnxruntime/python/tools/transformers/models/stable_diffusion/README.md#run-demo-with-docker
Or wait for the 1.18 release, which should be ready in about two weeks.
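The root cause is a version mismatch: a main-branch demo script running against an installed 1.17.3 wheel. A pre-flight check along these lines could catch it before launching the demo — a sketch only, assuming the usual onnxruntime-gpu / onnxruntime distribution names; the 1.17.3 version string is from this thread:

```python
import importlib.metadata
from typing import Optional

def installed_ort_version() -> Optional[str]:
    """Return the installed onnxruntime wheel's version, or None if absent."""
    # The GPU and CPU wheels ship under different distribution names.
    for dist in ("onnxruntime-gpu", "onnxruntime"):
        try:
            return importlib.metadata.version(dist)
        except importlib.metadata.PackageNotFoundError:
            continue
    return None

# Warn before running the demo if the wheel does not match the checkout:
version = installed_ort_version()
if version != "1.17.3":
    print(f"installed onnxruntime is {version}, but this checkout is rel-1.17.3")
```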

@nguyenthekhoig7
Author

Hi, thanks for your response.

I tried building from source by first cloning the repo; after cloning, GpuBindingManager was present in the code.
Then I ran

./build.sh --build-wheel

following the installation instructions, but encountered

RuntimeError: CUDA error: out of memory

Can I build the wheel using my CPU RAM instead, and how can I do that? If further information is needed, please let me know.
Again, I appreciate your prompt support.

@tianleiwu
Contributor

Building the wheel uses CPU RAM. The error looks like it occurred during testing. You can skip the tests if you follow the steps in the link I shared above.

export CUDACXX=/usr/local/cuda-12.2/bin/nvcc
sh build.sh --config Release  --build_shared_lib --parallel --use_cuda --cuda_version 12.2 \
            --cuda_home /usr/local/cuda-12.2 --cudnn_home /usr/lib/x86_64-linux-gnu/ --build_wheel --skip_tests \
            --cmake_extra_defines onnxruntime_BUILD_UNIT_TESTS=OFF \
            --allow_running_as_root
python3 -m pip install build/Linux/Release/dist/onnxruntime_gpu-*.whl --force-reinstall
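After the force-reinstall, a quick sanity check (a sketch, not from the thread) can confirm that the freshly built wheel is the one being resolved and that it ships io_binding_helper:

```python
import importlib.util

# Resolve the module the demo script imports from. If the new wheel is
# picked up, this prints the path to io_binding_helper.py inside it;
# otherwise it reports that onnxruntime is not installed.
try:
    spec = importlib.util.find_spec("onnxruntime.transformers.io_binding_helper")
except ModuleNotFoundError:
    spec = None
print(spec.origin if spec is not None else "onnxruntime not installed")
```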

@nguyenthekhoig7
Author

It worked (after I changed 12.2 to my CUDA version and set some library paths in the environment).
Thank you very much @tianleiwu. Hope you have a good day!
