Unable to use pytorch library with libtorch backend when using Triton Inference Server in-process Python API #7222
/CC @yuzisun
@sivanantha321 - is it possible to provide the .pt file / instructions on recreating it?
Never mind - I was able to find a model to reproduce it locally. I believe the issue is that the latest public PyTorch version, as installed via pip, conflicts with the torch libraries used by the libtorch backend. I will experiment with some potential workarounds.
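One way to confirm this kind of conflict on Linux is to check whether two distinct copies of the same shared library end up mapped into one process (e.g. `libtorch` from the pip wheel and from the backend directory). The helper below is an illustration I'm adding, not something from this thread, and it is Linux-specific since it reads `/proc/self/maps`:

```python
import os

def loaded_copies(name_substr):
    """Return the distinct file paths of currently mapped shared objects
    (Linux only) whose filename contains name_substr, e.g. 'libtorch'."""
    paths = set()
    with open("/proc/self/maps") as maps:
        for line in maps:
            fields = line.split()
            # The sixth field, when present, is the backing file path.
            if len(fields) >= 6 and name_substr in os.path.basename(fields[5]):
                paths.add(fields[5])
    return sorted(paths)

# Example: after `import torch` and loading the libtorch backend, paths from
# more than one directory here would indicate duplicate library copies:
# print(loaded_copies("libtorch"))
```

If the returned paths span both the torch wheel's `lib/` directory and the Triton backend directory, the same symbols are being loaded twice.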
Thanks for looking into this.
I made some progress using the NGC PyTorch image as a base and then copying the tritonserver binaries into it. However, when doing that with pre-built libraries I still ran into an issue with torchvision: the shared library was imported twice, which caused conflicts (I think that is a fundamental issue with libtorchvision.so). I then rebuilt the Triton PyTorch backend without torchvision support (seen in the Dockerfile above). However, I haven't been able to confirm it with a use case - I was testing a resnet50 model but didn't get to the stage where the results looked correct to me. I'm giving this as an update here in case you have time to try / test on your end.
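A minimal sketch of the image layout described above (the tags, the wheel path, and the copied directories are assumptions on my part - the actual Dockerfile referenced in the thread is authoritative):

```dockerfile
# Hypothetical sketch: NGC PyTorch image as base, Triton binaries copied in.
ARG PYTORCH_IMAGE=nvcr.io/nvidia/pytorch:24.04-py3
ARG TRITON_IMAGE=nvcr.io/nvidia/tritonserver:24.04-py3

FROM ${TRITON_IMAGE} AS triton
FROM ${PYTORCH_IMAGE}

# Bring the server binaries, libraries, and backends into the PyTorch image.
COPY --from=triton /opt/tritonserver /opt/tritonserver
ENV PATH=/opt/tritonserver/bin:${PATH}
ENV LD_LIBRARY_PATH=/opt/tritonserver/lib:${LD_LIBRARY_PATH}

# Install the in-process Python API wheel into the NGC image's Python
# environment (the wheel location inside the container is an assumption).
RUN pip install /opt/tritonserver/python/tritonserver-*.whl
```

The key point is that torch itself comes from the NGC base image, so the backend must be made to use those libraries rather than its own copies.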
@sivanantha321 - were you able to try the workaround?
@nnshah1 Thanks for the big help! Yes, I tried the workaround and it worked successfully. There is one more thing I'd like to know: is there a way to use a custom PyTorch version other than the one that comes with the NGC PyTorch image?
@sivanantha321 - I believe you would just need to rebuild the PyTorch backend with the custom version of PyTorch you want to use:
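For reference, rebuilding the backend against a chosen PyTorch roughly follows the pytorch_backend repository's build flow; the image tag below is a placeholder, and the exact CMake option names should be verified against the backend's README for your branch:

```shell
git clone https://github.com/triton-inference-server/pytorch_backend.git
cd pytorch_backend && mkdir build && cd build

# TRITON_PYTORCH_DOCKER_IMAGE selects the PyTorch container whose libtorch
# the backend is compiled against; point it at the version you want.
cmake -DCMAKE_INSTALL_PREFIX:PATH=$(pwd)/install \
      -DTRITON_PYTORCH_DOCKER_IMAGE="nvcr.io/nvidia/pytorch:24.04-py3" \
      ..
make install
```

The resulting `install/backends/pytorch` directory then replaces the backend shipped in the Triton container.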
@Tabrizian, @rmccorm4, @tanmayv25 for visibility. In this workaround I searched for the backend PyTorch libraries and replaced them with symlinks to the system ones. That may be a simple recipe for installing pytorch and the pytorch backend in the same container without doubling the libraries, but it needs further review and testing.
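The symlink recipe above could be scripted roughly as follows. Both directory paths are assumptions and must be adapted to the actual install locations (e.g. `/opt/tritonserver/backends/pytorch` and the `lib/` directory of the pip-installed torch wheel):

```shell
#!/bin/sh
# Replace the backend's private copies of shared libraries with symlinks to
# the ones shipped in the pip-installed torch wheel, so each library exists
# (and is loaded) only once. Hypothetical example arguments:
#   backend_dir    /opt/tritonserver/backends/pytorch
#   torch_lib_dir  $(python3 -c 'import torch, os; print(os.path.join(os.path.dirname(torch.__file__), "lib"))')
link_backend_libs() {
    backend_dir="$1"
    torch_lib_dir="$2"
    for lib in "$backend_dir"/lib*.so*; do
        name=$(basename "$lib")
        # Only touch libraries that the torch wheel also provides; leave
        # backend-only libraries (e.g. the backend itself) alone.
        if [ -f "$torch_lib_dir/$name" ]; then
            ln -sf "$torch_lib_dir/$name" "$lib"
        fi
    done
}
```

Libraries that exist only in the backend directory are left untouched, which is what makes this safer than deleting the backend's `lib*.so*` files wholesale.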
Description
A clear and concise description of what the bug is.
I am trying to use the newly introduced Triton Inference Server in-process Python API to serve PyTorch models using the libtorch backend. I am using the pytorch and torchvision libraries to do some pre- and post-processing of the input data before sending it to the Triton server for prediction. But when I try to use pytorch or torchvision I am getting the following error.
Triton Server logs:
Triton Information
What version of Triton are you using?
Are you using the Triton container or did you build it yourself?
I am using the nvcr.io/nvidia/tritonserver:24.04-py3 container to serve the model using the in-process Python API.
To Reproduce
Steps to reproduce the behavior.
A simple script to reproduce the error.
Describe the models (framework, inputs, outputs), ideally include the model configuration file (if using an ensemble include the model configuration file for that as well).
Expected behavior
A clear and concise description of what you expected to happen.
Pytorch and torchvision should work with tritonserver in-process python API