
Deepstream 6.4 does not work with YOLOv5 on Jetson in Docker #511

Open · pktiuk opened this issue Feb 13, 2024 · 0 comments
pktiuk commented Feb 13, 2024

I am unable to build the engine following the instructions at https://github.com/marcoslucianops/DeepStream-Yolo/blob/master/docs/YOLOv5.md on an NVIDIA Jetson. (Everything works on a regular x86 machine with a dGPU.)

Reproduction steps

Enable displaying windows from inside Docker:

xhost +local:

Run the DeepStream 6.4 image with the GPU available:

docker run --gpus=all -it --rm -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY nvcr.io/nvidia/deepstream:6.4-triton-multiarch bash
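
Once inside the container, it can help to confirm the versions the image ships with before going further; deepstream-app supports --version-all, which prints the DeepStream, driver, CUDA, TensorRT, and cuDNN versions (useful context when reporting issues like this one):

deepstream-app --version-all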

Go through the steps in the docs.

You can just paste:

# install kmod to avoid a warning
apt install -y kmod

git clone https://github.com/marcoslucianops/DeepStream-Yolo.git
cd DeepStream-Yolo/
git clone https://github.com/ultralytics/yolov5.git
cd yolov5
pip3 install cmake
pip3 install -r requirements.txt
pip3 install onnx onnxsim onnxruntime

cp ./../utils/export_yoloV5.py ./
wget https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5s.pt
python3 export_yoloV5.py -w yolov5s.pt --dynamic
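# optional sanity check on the exported model (a hedged extra step: it only
# uses the onnx Python package installed above to validate the graph and dump
# the opset; no inference is run)
python3 -c "import onnx; m = onnx.load('yolov5s.onnx'); onnx.checker.check_model(m); print(m.opset_import)"
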
# Build
cd ..
CUDA_VER=12.2 make -C nvdsinfer_custom_impl_Yolo
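# optional: confirm the custom library actually built (the .so name is assumed
# from the repo's Makefile and docs)
ls -l nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so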

# copy the ONNX model and labels (the engine itself is built later by deepstream-app)
cp ./yolov5/yolov5s.onnx ./
cp ./yolov5/labels.txt ./

# Edit the deepstream_app_config file
sed -i.bak 's/config_infer_primary.txt/config_infer_primary_yoloV5.txt/g' ./deepstream_app_config.txt
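# optional: confirm the swap took effect (the config-file line under
# [primary-gie] should now point at config_infer_primary_yoloV5.txt)
grep config_infer ./deepstream_app_config.txt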

deepstream-app -c deepstream_app_config.txt

Results

When I go through these steps on my x86 machine, everything works fine:

Building the TensorRT Engine
Building complete

But on a Jetson (AGX Orin) I get errors:

deepstream-app -c deepstream_app_config.txt
......
WARNING: [TRT]: onnx2trt_utils.cpp:372: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: onnx2trt_utils.cpp:400: One or more weights outside the range of INT32 was clamped
WARNING: [TRT]: Tensor DataType is determined at build time for tensors not marked as input or output.

Building the TensorRT Engine
ERROR: [TRT]: 10: Could not find any implementation for node /0/model.24/Expand_2.
ERROR: [TRT]: 10: [optimizer.cpp::computeCosts::3869] Error Code 10: Internal Error (Could not find any implementation for node /0/model.24/Expand_2.)
Building engine failed

Failed to build CUDA engine
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:04:21.811708224  2619 0xaaaadebb0040 ERROR                nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2126> [UID = 1]: build engine file failed
.........
** ERROR: <main:716>: Failed to set pipeline to PAUSED
Quitting
nvstreammux: Successfully handled EOS for source_id=0
ERROR from primary_gie: Failed to create NvDsInferContext instance
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(898): gst_nvinfer_start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie:
Config file path: /opt/nvidia/deepstream/deepstream-6.4/DeepStream-Yolo/config_infer_primary_yoloV5.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
App run failed
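
For reference, the failure happens in TensorRT's optimizer on an Expand node, which may be tied to the --dynamic export, so a static-shape export could be worth trying as a cross-check. A hedged sketch only: it assumes export_yoloV5.py accepts a --batch flag like the repo's other export scripts, and it has not been verified on Jetson:

python3 export_yoloV5.py -w yolov5s.pt --batch 1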