
Jetson 4.6.1: opset 17 not supported for onnxruntime-gpu 1.11.0 #377

Open
1 of 2 tasks
TomasBooneHogent opened this issue May 6, 2024 · 9 comments
Labels
bug Something isn't working

Comments

@TomasBooneHogent

Search before asking

  • I have searched the Inference issues and found no similar bug report.

Bug

[Two screenshots of the error attached]

Environment

NVIDIA Jetson-AGX
L4T 32.6.1 [ JetPack 4.6 ]
Ubuntu 18.04.5 LTS
Kernel Version: 4.9.253-tegra
CUDA 10.2.300
CUDA Architecture: 7.2
OpenCV version: 4.1.1
OpenCV Cuda: NO
CUDNN: 8.2.1.32
TensorRT: 8.2.1.9
Vision Works: 1.6.0.501
VPI: 1.2.3
Vulcan: 1.2.70

Minimal Reproducible Example

No response

Additional

JetPack 4.6.1 supports onnxruntime only up to 1.11.0
onnxruntime 1.11.0 supports only up to opset 16
Roboflow models are opset 17 models
They need to be converted to opset 16 models for the JetPack 4.6.1 image
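The version chain above can be expressed as a small compatibility check. This is a minimal sketch, assuming the opset ceilings stated in this issue (onnxruntime 1.11 tops out at opset 16, 1.12 adds opset 17); the `MAX_OPSET` table is a partial assumption covering only the versions relevant here, not an authoritative list.

```python
# Highest ONNX opset each onnxruntime release supports.
# Partial table, assumed from this issue: 1.11 -> 16, 1.12 -> 17.
MAX_OPSET = {
    (1, 11): 16,
    (1, 12): 17,
}

def model_is_compatible(model_opset: int, ort_version: str) -> bool:
    """Return True if a model with `model_opset` can load on `ort_version`."""
    major, minor = (int(p) for p in ort_version.split(".")[:2])
    # Fall back to the newest known entry for versions past the table.
    supported = MAX_OPSET.get((major, minor), max(MAX_OPSET.values()))
    return model_opset <= supported

print(model_is_compatible(17, "1.11.0"))  # False: newest ORT on JetPack 4.6.1
print(model_is_compatible(16, "1.11.0"))  # True: after downgrading the opset
```

This makes the failure mode concrete: an opset 17 Roboflow model simply cannot be loaded by the onnxruntime 1.11.0 build that JetPack 4.6.1 is limited to.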

Are you willing to submit a PR?

  • Yes I'd like to help by submitting a PR!
@TomasBooneHogent TomasBooneHogent added the bug Something isn't working label May 6, 2024
@PawelPeczek-Roboflow
Collaborator

hi there,

Thanks for reporting the problem - may I ask when the model was trained? (I assume you trained the model on the Roboflow platform.)
I am asking because we've had this problem reported before and reverted the change that made models opset 17, back to opset 16.

@TomasBooneHogent
Author

TomasBooneHogent commented May 7, 2024

When I update the inference package manually the opset issue is solved indeed, but the CUDAExecutionProvider cannot be found and therefore defaults to the CPUExecutionProvider. Are you sure the latest inference code is compatible with CUDA 10.2.3?
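The silent fallback described above is onnxruntime's normal behavior: when a requested execution provider cannot be loaded (e.g. the GPU build does not match CUDA 10.2), the session quietly runs on the CPU provider. A pure-Python sketch of that resolution logic (the real check would query `onnxruntime.get_available_providers()`):

```python
def resolve_provider(requested, available):
    """Return the first requested provider that is actually available."""
    for provider in requested:
        if provider in available:
            return provider
    return "CPUExecutionProvider"  # onnxruntime's ultimate fallback

# On a Jetson where the CUDA provider failed to load:
print(resolve_provider(
    ["CUDAExecutionProvider", "CPUExecutionProvider"],
    ["CPUExecutionProvider"],
))  # CPUExecutionProvider
```

Comparing `session.get_providers()` against what you requested is the quickest way to confirm whether inference actually landed on the GPU.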

@TomasBooneHogent
Author

yolov8s (OD) generated Feb 5, 2024 = no issue
yolov8s (OD) generated Feb 22, 2024 = no issue
yolov8s (OD) generated Mar 5, 2024 = issue
yolov8s (OD) generated Mar 20, 2024 = issue
yolov8s (OD) generated Mar 25, 2024 = issue
Roboflow 3.0 (OD) generated Mar 13, 2024 = no issue
Roboflow 3.0 (OD) generated Mar 18, 2024 = no issue
Roboflow 3.0 (OD) generated Mar 18, 2024 = no issue
Roboflow 3.0 (OD) generated Apr 19, 2024 = issue
yolov8s (IS) generated Feb 26, 2024 = no issue
yolov8l (IS) generated Mar 5, 2024 = issue
yolov8s (IS) generated Mar 6, 2024 = issue
yolov8s (IS) generated Mar 14, 2024 = issue
yolov8s (IS) generated Mar 28, 2024 = issue
Roboflow 3.0 (IS) generated Mar 14, 2024 = no issue
Roboflow 3.0 (IS) generated Apr 17, 2024 = issue

@TomasBooneHogent
Author

OD = Object detection
IS = Instance Segmentation

@TomasBooneHogent
Author

So only new models trained will be compatible again?

@PawelPeczek-Roboflow
Collaborator

Let's connect through e-mail ([email protected])
What you report is indeed worrying - I would like to take a look at the model artefacts to verify what's going on, but I would need to know internal details about your project on the platform to figure out the issue.

@TomasBooneHogent
Author

I already reported and showed the issue to Jack Gallo; he knows the details.

@TomasBooneHogent
Author

It would basically be one line of code: call YOLO.export(format="onnx", opset=16) whenever the target onnxruntime version is < 1.12.0.
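The proposed fix could be sketched as below. The version-gate helper is pure Python and testable; the ultralytics export call (`YOLO(...).export(format="onnx", opset=...)` is the real ultralytics API) is shown commented out so the snippet stays self-contained. The target onnxruntime version is assumed to be known at export time.

```python
def export_opset(target_ort_version: str) -> int:
    """Opset to request: 16 for onnxruntime < 1.12.0, else 17."""
    major, minor = (int(p) for p in target_ort_version.split(".")[:2])
    return 16 if (major, minor) < (1, 12) else 17

# The actual export (requires the `ultralytics` package):
# from ultralytics import YOLO
# model = YOLO("yolov8s.pt")
# model.export(format="onnx", opset=export_opset("1.11.0"))

print(export_opset("1.11.0"))  # 16 -- what Jetson JetPack 4.6.1 needs
print(export_opset("1.12.0"))  # 17
```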

@PawelPeczek-Roboflow
Collaborator

Ok, I will ask Jack.
And yes, that was also our thinking for the solution.
