YOLOX OpenVINO Model Batch Inference issue #1741

Open
LiuYiShan613 opened this issue Dec 4, 2023 · 1 comment

@LiuYiShan613

I am running batch inference on a YOLOX model exported to the OpenVINO IR format. With a batch size of 1, the IR model's output shape is (1, 3549, 85). However, with a batch size of n, the output shape becomes (1, 3549 * n, 85). I expected an output shape of (n, 3549, 85), which is what the demo_postprocess function requires.

I therefore tried reshaping the IR model's output from (1, 3549 * n, 85) to (n, 3549, 85), but after the reshape the bounding boxes are not predicted correctly. How should I process this output so that it fits the demo_postprocess function?
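
For reference, here is a minimal sketch of my setup (the model path, device, and dummy input are placeholders; I'm assuming the OpenVINO 2022+ Python API):

```python
import numpy as np
from openvino.runtime import Core

core = Core()
# "yolox_s.xml" is a placeholder for the IR exported with batch size n.
compiled = core.compile_model("yolox_s.xml", "CPU")

n = 4  # batch size baked into the exported model
# Dummy preprocessed images; 416x416 input matches the 3549 predictions
# per image (52*52 + 26*26 + 13*13 = 3549).
batch = np.random.rand(n, 3, 416, 416).astype(np.float32)

outputs = compiled(batch)[compiled.output(0)]
print(outputs.shape)  # observed: (1, 3549 * n, 85); expected: (n, 3549, 85)

# The naive reshape below yields wrong boxes, so the rows are apparently
# not laid out as [image 0 preds, image 1 preds, ...]:
reshaped = outputs.reshape(n, 3549, 85)
```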

@tsubasahasumi

Hello,

I followed the instructions provided in this guide to create an IR model for OpenVINO. However, like you, I encountered an issue where the model structure became corrupted when setting the input batch size to n.

In my case, when running tools/export_onnx.py to create the ONNX model, I added the --no-onnxsim option, which seemed to resolve the problem for me. While I can't say for certain what caused the issue, it seems that the ONNX Simplifier (onnxsim) might be the culprit during the conversion process.
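
For reference, my export command looked roughly like this (the exp name and checkpoint are placeholders following the standard YOLOX export instructions; --no-onnxsim is the relevant part):

```
python tools/export_onnx.py --output-name yolox_s.onnx -n yolox-s -c yolox_s.pth --no-onnxsim
```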

Hope it helps a bit.
