[ WARN:0] global /opt/opencv/modules/videoio/src/cap_gstreamer.cpp (935) #1822

Open · KDAgusin opened this issue Mar 31, 2024 · 0 comments

@KDAgusin
Hello, good day! I am building a Hand Gesture Estimation project using poseNet with a custom hand gesture dataset. I followed the way you train a dataset for imagenet classification, but applied it to my posenet. I mounted a host folder called "Code" into the jetson-inference docker container and placed all of my dataset in the jetson-inference/python/training/classification folder.

I trained the model exactly the way an imagenet model is trained, but instead of loading it with imagenet in my program, I loaded and configured it for my posenet. I'm really not quite sure where I went wrong. My program works well with imagenet but doesn't do its job for the posenet.
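
For comparison, here is roughly how I load the same model with imageNet, which works fine for me (a minimal sketch; the `--input_blob`/`--output_blob` names follow the standard jetson-inference ONNX export):

```python
from jetson_inference import imageNet

# same model/labels exported from the classification training above
net = imageNet(argv=[
    '--model=/jetson-inference/python/training/classification/models/dataset/resnet18.onnx',
    '--labels=/jetson-inference/python/training/classification/models/dataset/labels.txt',
    '--input_blob=input_0',
    '--output_blob=output_0',
])
```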

Here are my setup specifications:
Jetson Nano 2GB Devkit
JetPack 4.6.4
torch (1.10.0)
torchaudio (0.10.0+d2634d8)
torchvision (0.11.0a0+fa347eb)
Python: 3.6.9
OpenCV: 4.5.5 with CUDA: Yes

Below I'm providing my code and the error output it produces:

MY CODE:
```python
import cv2
import numpy as np
import jetson_inference

from jetson_inference import poseNet
from jetson_utils import Log, cudaFromNumpy, cudaToNumpy

net = poseNet(argv=[
    '--model=/jetson-inference/python/training/classification/models/dataset/resnet18.onnx',
    '--colors=/jetson-inference/data/networks/Pose-ResNet18-Hand/colors.txt',
    '--input_blob=input_0',
    '--output_cmap=output_0',
    '--output_paf=output_0',
    '--labels=/jetson-inference/python/training/classification/models/dataset/labels.txt',
    '--topology=/jetson-inference/data/networks/Pose-ResNet18-Hand/hand_pose.json',
])

cap = cv2.VideoCapture(0)  # try -1 if your cam is CSI

# image size
width = 640
height = 480

# set the capture image resolution
cap.set(3, width)
cap.set(4, height)

# process frames until EOS or the user exits
while cap.isOpened():
    ret, img = cap.read()
    #img = cv2.rotate(img, cv2.ROTATE_180)

    # change image format to cuda memory
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    img = cv2.cvtColor(img, cv2.COLOR_RGB2RGBA).astype(np.float32)
    img = cudaFromNumpy(img)

    # perform pose estimation (with overlay)
    poses = net.Process(img, overlay="links,keypoints")

    # print the pose results
    print("detected {:d} objects in image".format(len(poses)))

    for pose in poses:
        print(pose)
        print(pose.Keypoints)
        print('Links', pose.Links)

    img = cudaToNumpy(img, width, height, 3)  # change to 3 if pose estimation is slow
    img = cv2.cvtColor(img, cv2.COLOR_RGBA2RGB).astype(np.uint8)
    img = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)

    cv2.imshow("Pose Hand", img)

    key = cv2.waitKey(1) & 0xFF

    if key != -1:  # check if a key was pressed at all
        if key == ord('q'):
            break

cap.release()
cv2.destroyAllWindows()
```
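
For reference, below is a minimal sketch of the same loop using jetson_utils' own `videoSource`/`videoOutput` instead of OpenCV capture, which skips the numpy-to-CUDA conversions entirely (I'm assuming the `/dev/video0` and `display://0` URIs from the jetson-inference examples):

```python
from jetson_inference import poseNet
from jetson_utils import videoSource, videoOutput

# same model arguments as in the script above
net = poseNet(argv=[
    '--model=/jetson-inference/python/training/classification/models/dataset/resnet18.onnx',
    '--colors=/jetson-inference/data/networks/Pose-ResNet18-Hand/colors.txt',
    '--input_blob=input_0',
    '--output_cmap=output_0',
    '--output_paf=output_0',
    '--labels=/jetson-inference/python/training/classification/models/dataset/labels.txt',
    '--topology=/jetson-inference/data/networks/Pose-ResNet18-Hand/hand_pose.json',
])

camera = videoSource("/dev/video0")    # assumed V4L2 camera URI
display = videoOutput("display://0")   # on-screen OpenGL window

# process frames until EOS or the user exits
while display.IsStreaming():
    img = camera.Capture()
    if img is None:  # capture timeout, try again
        continue

    # run pose estimation directly on the cudaImage (with overlay)
    poses = net.Process(img, overlay="links,keypoints")
    print("detected {:d} objects in image".format(len(poses)))

    display.Render(img)
    display.SetStatus("Pose Hand | {:.0f} FPS".format(net.GetNetworkFPS()))
```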

OUTPUT FROM CMD LINE:

```
root@ubuntu:/Code# python3 gesture_detection.py

poseNet -- loading pose estimation model from:
-- model /jetson-inference/python/training/classification/models/dataset/resnet18.onnx
-- topology /jetson-inference/data/networks/Pose-ResNet18-Hand/hand_pose.json
-- colors /jetson-inference/data/networks/Pose-ResNet18-Hand/colors.txt
-- input_blob 'input_0'
-- output_cmap 'output_0'
-- output_paf 'output_0'
-- threshold 0.150000
-- batch_size 1

[TRT] topology -- keypoint 0 palm
[TRT] topology -- keypoint 1 thumb_1
[TRT] topology -- keypoint 2 thumb_2
[TRT] topology -- keypoint 3 thumb_3
[TRT] topology -- keypoint 4 thumb_4
[TRT] topology -- keypoint 5 index_finger_1
[TRT] topology -- keypoint 6 index_finger_2
[TRT] topology -- keypoint 7 index_finger_3
[TRT] topology -- keypoint 8 index_finger_4
[TRT] topology -- keypoint 9 middle_finger_1
[TRT] topology -- keypoint 10 middle_finger_2
[TRT] topology -- keypoint 11 middle_finger_3
[TRT] topology -- keypoint 12 middle_finger_4
[TRT] topology -- keypoint 13 ring_finger_1
[TRT] topology -- keypoint 14 ring_finger_2
[TRT] topology -- keypoint 15 ring_finger_3
[TRT] topology -- keypoint 16 ring_finger_4
[TRT] topology -- keypoint 17 baby_finger_1
[TRT] topology -- keypoint 18 baby_finger_2
[TRT] topology -- keypoint 19 baby_finger_3
[TRT] topology -- keypoint 20 baby_finger_4
[TRT] topology -- skeleton link 0 1 5
[TRT] topology -- skeleton link 1 1 9
[TRT] topology -- skeleton link 2 1 13
[TRT] topology -- skeleton link 3 1 17
[TRT] topology -- skeleton link 4 1 21
[TRT] topology -- skeleton link 5 2 3
[TRT] topology -- skeleton link 6 3 4
[TRT] topology -- skeleton link 7 4 5
[TRT] topology -- skeleton link 8 6 7
[TRT] topology -- skeleton link 9 7 8
[TRT] topology -- skeleton link 10 8 9
[TRT] topology -- skeleton link 11 10 11
[TRT] topology -- skeleton link 12 11 12
[TRT] topology -- skeleton link 13 12 13
[TRT] topology -- skeleton link 14 14 15
[TRT] topology -- skeleton link 15 15 16
[TRT] topology -- skeleton link 16 16 17
[TRT] topology -- skeleton link 17 18 19
[TRT] topology -- skeleton link 18 19 20
[TRT] topology -- skeleton link 19 20 21
[TRT] poseNet -- keypoint 00 'palm' color 200 200 200 255
[TRT] poseNet -- keypoint 01 'thumb_1' color 215 0 0 255
[TRT] poseNet -- keypoint 02 'thumb_2' color 194 0 0 255
[TRT] poseNet -- keypoint 03 'thumb_3' color 172 0 0 255
[TRT] poseNet -- keypoint 04 'thumb_4' color 150 0 0 255
[TRT] poseNet -- keypoint 05 'index_finger_1' color 221 255 48 255
[TRT] poseNet -- keypoint 06 'index_finger_2' color 200 230 43 255
[TRT] poseNet -- keypoint 07 'index_finger_3' color 175 200 35 255
[TRT] poseNet -- keypoint 08 'index_finger_4' color 155 180 31 255
[TRT] poseNet -- keypoint 09 'middle_finger_1' color 152 254 111 255
[TRT] poseNet -- keypoint 10 'middle_finger_2' color 135 228 101 255
[TRT] poseNet -- keypoint 11 'middle_finger_3' color 120 203 89 255
[TRT] poseNet -- keypoint 12 'middle_finger_4' color 105 180 78 255
[TRT] poseNet -- keypoint 13 'ring_finger_1' color 64 103 252 255
[TRT] poseNet -- keypoint 14 'ring_finger_2' color 57 93 227 255
[TRT] poseNet -- keypoint 15 'ring_finger_3' color 51 83 202 255
[TRT] poseNet -- keypoint 16 'ring_finger_4' color 43 72 177 255
[TRT] poseNet -- keypoint 17 'baby_finger_1' color 173 18 252 255
[TRT] poseNet -- keypoint 18 'baby_finger_2' color 156 15 227 255
[TRT] poseNet -- keypoint 19 'baby_finger_3' color 138 12 201 255
[TRT] poseNet -- keypoint 20 'baby_finger_4' color 121 9 177 255
[TRT] poseNet -- loaded 21 class colors
[TRT] TensorRT version 8.2.1
[TRT] loading NVIDIA plugins...
[TRT] Registered plugin creator - ::GridAnchor_TRT version 1
[TRT] Registered plugin creator - ::GridAnchorRect_TRT version 1
[TRT] Registered plugin creator - ::NMS_TRT version 1
[TRT] Registered plugin creator - ::Reorg_TRT version 1
[TRT] Registered plugin creator - ::Region_TRT version 1
[TRT] Registered plugin creator - ::Clip_TRT version 1
[TRT] Registered plugin creator - ::LReLU_TRT version 1
[TRT] Registered plugin creator - ::PriorBox_TRT version 1
[TRT] Registered plugin creator - ::Normalize_TRT version 1
[TRT] Registered plugin creator - ::ScatterND version 1
[TRT] Registered plugin creator - ::RPROI_TRT version 1
[TRT] Registered plugin creator - ::BatchedNMS_TRT version 1
[TRT] Registered plugin creator - ::BatchedNMSDynamic_TRT version 1
[TRT] Could not register plugin creator - ::FlattenConcat_TRT version 1
[TRT] Registered plugin creator - ::CropAndResize version 1
[TRT] Registered plugin creator - ::DetectionLayer_TRT version 1
[TRT] Registered plugin creator - ::EfficientNMS_TRT version 1
[TRT] Registered plugin creator - ::EfficientNMS_ONNX_TRT version 1
[TRT] Registered plugin creator - ::EfficientNMS_TFTRT_TRT version 1
[TRT] Registered plugin creator - ::Proposal version 1
[TRT] Registered plugin creator - ::ProposalLayer_TRT version 1
[TRT] Registered plugin creator - ::PyramidROIAlign_TRT version 1
[TRT] Registered plugin creator - ::ResizeNearest_TRT version 1
[TRT] Registered plugin creator - ::Split version 1
[TRT] Registered plugin creator - ::SpecialSlice_TRT version 1
[TRT] Registered plugin creator - ::InstanceNormalization_TRT version 1
[TRT] detected model format - ONNX (extension '.onnx')
[TRT] desired precision specified for GPU: FASTEST
[TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT] [MemUsageChange] Init CUDA: CPU +224, GPU +0, now: CPU 260, GPU 3494 (MiB)
[TRT] [MemUsageSnapshot] Begin constructing builder kernel library: CPU 260 MiB, GPU 3494 MiB
[TRT] [MemUsageSnapshot] End constructing builder kernel library: CPU 290 MiB, GPU 3523 MiB
[TRT] native precisions detected for GPU: FP32, FP16
[TRT] selecting fastest native precision for GPU: FP16
[TRT] found engine cache file /jetson-inference/python/training/classification/models/dataset/resnet18.onnx.1.1.8201.GPU.FP16.engine
[TRT] found model checksum /jetson-inference/python/training/classification/models/dataset/resnet18.onnx.sha256sum
[TRT] echo "$(cat /jetson-inference/python/training/classification/models/dataset/resnet18.onnx.sha256sum) /jetson-inference/python/training/classification/models/dataset/resnet18.onnx" | sha256sum --check --status
[TRT] model matched checksum /jetson-inference/python/training/classification/models/dataset/resnet18.onnx.sha256sum
[TRT] loading network plan from engine cache... /jetson-inference/python/training/classification/models/dataset/resnet18.onnx.1.1.8201.GPU.FP16.engine
[TRT] device GPU, loaded /jetson-inference/python/training/classification/models/dataset/resnet18.onnx
[TRT] [MemUsageChange] Init CUDA: CPU +0, GPU +0, now: CPU 307, GPU 3569 (MiB)
[TRT] Loaded engine size: 45 MiB
[TRT] Using cublas as a tactic source
[TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +158, GPU -7, now: CPU 465, GPU 3562 (MiB)
[TRT] Using cuDNN as a tactic source
[TRT] [MemUsageChange] Init cuDNN: CPU +240, GPU +17, now: CPU 705, GPU 3579 (MiB)
[TRT] Deserialization required 3770659 microseconds.
[TRT] [MemUsageChange] TensorRT-managed allocation in engine deserialization: CPU +0, GPU +45, now: CPU 0, GPU 45 (MiB)
[TRT] Using cublas as a tactic source
[TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 705, GPU 3579 (MiB)
[TRT] Using cuDNN as a tactic source
[TRT] [MemUsageChange] Init cuDNN: CPU +0, GPU +0, now: CPU 705, GPU 3579 (MiB)
[TRT] Total per-runner device persistent memory is 42887680
[TRT] Total per-runner host persistent memory is 24224
[TRT] Allocated activation device memory of size 2408448
[TRT] [MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +0, GPU +43, now: CPU 0, GPU 88 (MiB)
[TRT]
[TRT] CUDA engine context initialized on device GPU:
[TRT] -- layers 27
[TRT] -- maxBatchSize 1
[TRT] -- deviceMemory 2408448
[TRT] -- bindings 2
[TRT] binding 0
-- index 0
-- name 'input_0'
-- type FP32
-- in/out INPUT
-- # dims 4
-- dim #0 1
-- dim #1 3
-- dim #2 224
-- dim #3 224
[TRT] binding 1
-- index 1
-- name 'output_0'
-- type FP32
-- in/out OUTPUT
-- # dims 2
-- dim #0 1
-- dim #1 3
[TRT]
[TRT] binding to input 0 input_0 binding index: 0
[TRT] binding to input 0 input_0 dims (b=1 c=3 h=224 w=224) size=602112
[TRT] binding to output 0 output_0 binding index: 1
[TRT] binding to output 0 output_0 dims (b=1 c=3 h=1 w=1) size=12
[TRT] binding to output 1 output_0 binding index: 1
[TRT] binding to output 1 output_0 dims (b=1 c=3 h=1 w=1) size=12
[TRT]
[TRT] device GPU, /jetson-inference/python/training/classification/models/dataset/resnet18.onnx initialized.
[ WARN:0] global /opt/opencv/modules/videoio/src/cap_gstreamer.cpp (935) open OpenCV | GStreamer warning: Cannot query video position: status=0, value=-1, duration=-1
Segmentation fault (core dumped)
```
