-
Deployment of custom Monai Application Package #454
Replies: 3 comments · 11 replies
-
Did you check whether Docker is still running? Secondly, did you try restarting the Docker daemon and then re-packaging and running the MAP? What are the other errors? Please specify the command that you are using to generate the MAP.
-
Thanks for the reply @KnightCoder. Also, I need to run the inference on a system with CPU only. Is there a way to deploy the model so that it doesn't require a GPU for inference? If yes, it would be a great help.
-
@vikashg @MMelQin
-
Thank you @MMelQin
Even though the below error was thrown, is it because nibabel doesn't provide the metadata named row_pixel_spacing?
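For context: `row_pixel_spacing` is a DICOM-derived metadata key, so a NIfTI file loaded with nibabel will not carry it; NIfTI stores voxel spacing in the header's `pixdim` field instead (roughly what nibabel's `img.header.get_zooms()` returns). A minimal stdlib sketch of where that information actually lives in a raw NIfTI-1 header (the function name is hypothetical, for illustration only — it is not part of the App SDK or nibabel):

```python
import struct

def nifti1_pixel_spacing(header_bytes: bytes):
    """Extract voxel spacing (pixdim[1:4]) from a raw NIfTI-1 header.

    NIfTI has no DICOM-style 'row_pixel_spacing' key; spacing lives in
    the 8-float 'pixdim' field at byte offset 76 of the 348-byte header.
    """
    # Detect byte order from sizeof_hdr, which must decode to 348.
    endian = "<" if struct.unpack("<i", header_bytes[:4])[0] == 348 else ">"
    pixdim = struct.unpack(endian + "8f", header_bytes[76:76 + 32])
    # pixdim[1], pixdim[2], pixdim[3] are the spacings along i, j, k.
    return pixdim[1:4]
```

So when an operator expects DICOM keys like `row_pixel_spacing`, a NIfTI input has to have its `pixdim` values mapped onto those keys (or the operator has to read spacing generically).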
-
Hi @ArpanGyawali, what are you using for skull stripping? I tried to use FSL's BET tool through a
-
Also, what @MMelQin mentioned about metadata is not the same as the nibabel metadata. If you try
-
Ignore my comment about
-
Hi @vikashg @MMelQin, yes, I used deep learning. I used MONAI Label's segmentation method, which uses a UNet architecture, for training, and that gave me the model file. The folder structure I used when taking a DICOM file as input was:
skull_strip_operator.py
app.py
Later I needed to use a NIfTI file as input instead of a DICOM file, so I created a new operator called nifty_loader_operator, which is different from the built-in NIfTI loader:
nifty_loader_operator.py
app.py
The skull-strip operator is the same as before. With this I no longer get the KeyError for row_pixel_spacing and the code runs to completion. But when I opened the input file together with the output segmentation, I got a dimension mismatch error. The other issues I have are: when loading the DICOM file as the main image in ITK-SNAP, and when loading the NIfTI file as the main image with the same predicted segmentation in ITK-SNAP. Sorry, I know I have asked a lot in this question.
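For reference, the core job of a custom NIfTI loader operator is getting the voxel array out of the file in the right order; dimension mismatches in ITK-SNAP typically come from the segmentation being written with a different shape, orientation, or affine than the input. A self-contained sketch of the loading step under the NIfTI-1 layout (illustrative only — a real operator would wrap nibabel or the App SDK's image type, and `load_nifti_volume` is a hypothetical helper):

```python
import gzip
import struct
import numpy as np

# The most common NIfTI-1 datatype codes mapped to numpy dtypes.
_DTYPES = {2: np.uint8, 4: np.int16, 8: np.int32, 16: np.float32, 64: np.float64}

def load_nifti_volume(path: str) -> np.ndarray:
    """Read a (optionally gzipped) NIfTI-1 file into a numpy array."""
    opener = gzip.open if path.endswith(".gz") else open
    with opener(path, "rb") as f:
        raw = f.read()
    # Byte order is inferred from sizeof_hdr, which must decode to 348.
    endian = "<" if struct.unpack("<i", raw[:4])[0] == 348 else ">"
    ndim = struct.unpack(endian + "h", raw[40:42])[0]          # dim[0]
    shape = struct.unpack(endian + "7h", raw[42:56])[:ndim]    # dim[1..ndim]
    datatype = struct.unpack(endian + "h", raw[70:72])[0]
    vox_offset = int(struct.unpack(endian + "f", raw[108:112])[0])
    dtype = np.dtype(_DTYPES[datatype]).newbyteorder(endian)
    count = int(np.prod(shape))
    data = np.frombuffer(raw, dtype=dtype, count=count, offset=vox_offset)
    # NIfTI stores voxels in Fortran (column-major) order.
    return data.reshape(shape, order="F")
```

If the segmentation written back out does not reuse the input's shape, Fortran ordering, and affine, viewers like ITK-SNAP will report a dimension or orientation mismatch even when the voxel data is correct.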
-
@ArpanGyawali Thanks for the questions. No worries, you have just gotten a taste of how tricky it is to navigate the different image formats and representations across frameworks.
-
-
Thank you so much @MMelQin
-
Very glad to hear that you sorted this out!
-
I am trying to deploy a MONAI application for skull stripping that I created using MONAI Label. I have to deploy the application with the trained model for inference.
So I packaged the application using monai-deploy (version 0.5.1), following the monai-deploy-app-sdk documentation for the Spleen Segmentation App.
I pushed the package to GitHub Packages (ghcr.io) using my GitHub token.
I can run the application easily using "monai-exec", but when I try to run the packaged application with this command:
monai-deploy run ghcr.io/my_organization_in_lower_case/my_image_name:latest input_dir output_dir
I need nvidia-docker to be installed on the system. I tried installing it along with the NVIDIA Container Toolkit based on NVIDIA's documentation, but I again get various errors, and now I am stuck at "docker: Error response from daemon: Unknown runtime specified nvidia".
I also looked at this discussion on running a MAP container without using the App SDK CLI, but there too the NVIDIA Container Toolkit and/or nvidia-docker 2 needs to be installed for GPU support.
Can we package the application using a non-NVIDIA (non-nvcr.io) base image so that it can run on a system without a GPU (CPU-only inference) and doesn't require nvidia-docker to be installed?
Is there any way to run the application package on CPU only?
I have been stuck on this problem for a long time. Any help would be highly appreciated.
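One note on the "Unknown runtime specified nvidia" error described above: it usually means the Docker daemon has no `nvidia` runtime registered. With nvidia-container-runtime installed, `/etc/docker/daemon.json` typically needs an entry like the following (a sketch of the standard nvidia-docker2 configuration; the binary path may differ on your system), after which the daemon must be restarted, e.g. with `sudo systemctl restart docker`:

```json
{
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
}
```

This only addresses the GPU path; it does not by itself enable CPU-only inference.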