enable_sequential_cpu_offload HuggingFace Diffusers error with sd2 example on T4 GPU #2
Comments
Update: this is also happening on a 3090 GPU.
Hi @BEpresent
Hi, I was following this example: https://modelserving.com/blog/creating-stable-diffusion-20-service-with-bentoml-and-diffusers
or, equivalently, a git clone of this example repo: https://github.com/bentoml/diffusers-examples/tree/main/sd2
which results in a simple service.py file.

After running

bentoml serve service:svc --production

I get the following error (it also happens with another custom model I tried). It seems to be related to enable_sequential_cpu_offload from HuggingFace Diffusers.

As general info, this runs on a GCP VM instance with a T4 GPU - could this be the issue?
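For context on what the failing call is supposed to do: Diffusers' enable_sequential_cpu_offload keeps the pipeline's weights in CPU memory and moves each submodule to the GPU only for its own forward pass. The following is a minimal conceptual sketch in plain PyTorch, not Diffusers' actual implementation (which relies on the accelerate library); the function name sequential_cpu_offload is made up for illustration, and device defaults to "cpu" so the sketch runs without a GPU (on a T4 it would be "cuda").

```python
import torch
import torch.nn as nn

def sequential_cpu_offload(model: nn.Module, device: str = "cpu") -> nn.Module:
    # Illustrative helper: each direct child of `model` gets hooks that move
    # its weights to the compute device just before its forward pass and
    # back to CPU immediately afterwards.
    def pre_hook(module, inputs):
        module.to(device)   # bring weights onto the compute device just in time

    def post_hook(module, inputs, output):
        module.to("cpu")    # evict weights back to CPU right after use

    for submodule in model.children():
        submodule.register_forward_pre_hook(pre_hook)
        submodule.register_forward_hook(post_hook)
    return model

# Tiny stand-in for a pipeline component; after the forward pass all
# parameters are back on the CPU.
model = sequential_cpu_offload(nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2)))
out = model(torch.randn(1, 4))
```

The trade-off this sketch illustrates is the one the real API makes: peak GPU memory drops to roughly the largest single submodule, at the cost of repeated host-to-device transfers on every inference step.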