
Inference on video #3

Open
constantinfite opened this issue Feb 21, 2022 · 12 comments

Comments

@constantinfite

Hi, I would like to know if it is possible to run inference on a video?
Thanks

@jveitchmichaelis
Owner

jveitchmichaelis commented Feb 21, 2022

This should be straightforward using the --stream parameter. Pass in --device as your filename and it should work. It's using cv2.VideoCapture(device) to load the stream which should work for either a physical device (like a webcam) or a file on disk.
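For example, something like this should work (the filenames here are just placeholders):

    python3 detect.py -m yolov5s-int8.tflite --device my_video.mp4 --stream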

If not, it should be easy to modify:

https://docs.opencv.org/3.4/dd/d43/tutorial_py_video_display.html

Currently there isn't support for outputting a labelled video file, but that would be fairly straightforward with cv2.VideoWriter in the loop.
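Roughly like this (an untested sketch; the filenames and the drawing step are placeholders):

    import cv2

    cap = cv2.VideoCapture("input.mp4")  # a file path works the same as a camera index
    fps = cap.get(cv2.CAP_PROP_FPS)
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    writer = cv2.VideoWriter("labelled.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # ... run inference here and draw the boxes on frame ...
        writer.write(frame)

    writer.release()
    cap.release()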

Let me know how you get on!

@constantinfite
Author

Thanks for your answer!
I tried this command: python3 detect.py -m best-shark-yolov5s-int8.tflite --device short_video.mp4 --stream, but it loads the wrong classes. It detects "bicycle", yet I trained my model to detect sharks.

@jveitchmichaelis
Owner

jveitchmichaelis commented Feb 21, 2022

You need to provide a dataset/names file (--names); you should have one for the dataset that you used for training. See this for example. If you need to make a copy, just update nc and the names list with your own classes.

The tflite file alone doesn't contain any information about class names; it just returns an ID, which is mapped to COCO class names by default.
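For example, a minimal names file might look like this (the shark.yaml filename and single class are just an illustration for your case):

    # shark.yaml - dataset/names file passed via --names
    nc: 1              # number of classes
    names: ['shark']   # class names, indexed by the IDs the model returns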

@constantinfite
Author

constantinfite commented Feb 21, 2022

OK, thanks.
And for saving the video I added this, but I don't know which image I have to save afterwards:

    # set up the writer once, before the loop
    fourcc = 'mp4v'  # output video codec
    fps = cam.get(cv2.CAP_PROP_FPS)
    w = int(cam.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cam.get(cv2.CAP_PROP_FRAME_HEIGHT))
    vid_writer = cv2.VideoWriter("exported.mp4", cv2.VideoWriter_fourcc(*fourcc), fps, (w, h))

    while True:
        res, image = cam.read()

        if res is False:
            logger.error("Empty image received")
            break
        else:
            full_image, net_image, pad = get_image_tensor(image, input_size[0])
            pred = model.forward(net_image)

            model.process_predictions(pred[0], full_image, pad)

            tinference, tnms = model.get_last_inference_time()
            logger.info("Frame done in {}".format(tinference + tnms))
            vid_writer.write( ???? )

@jveitchmichaelis
Owner

If you modify the process_predictions function (https://github.com/jveitchmichaelis/edgetpu-yolo/blob/main/edgetpumodel.py), have it return output_image (you might want to comment out the imwrite there). save_img is True by default, so I think it's already annotating the images - do you get an output file created?
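The end of the loop would then become something like this (a sketch, assuming process_predictions is changed to return the annotated output_image):

    # assuming process_predictions now returns the annotated frame
    output_image = model.process_predictions(pred[0], full_image, pad)
    vid_writer.write(output_image)

    # and once the loop exits:
    vid_writer.release()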

@constantinfite
Author

constantinfite commented Feb 21, 2022

Yes, it works, thanks, but look at my detection: the bounding box covers the whole image. The detection is not working.
image

Do you have an idea why it is like this?

@jveitchmichaelis
Owner

jveitchmichaelis commented Feb 21, 2022 via email

@constantinfite
Author

constantinfite commented Feb 22, 2022

So I exported my model using this command: python export.py --weights best-shark-yolov5s.pt --include tflite --int8.
I get a file best-shark-yolov5s-int8.tflite.
I ran the detection on a single image, but the inference takes a very long time: 18 seconds for one image! And that's on my computer with an RTX 2060.
At least the detection is working:
image

@jveitchmichaelis
Owner

jveitchmichaelis commented Feb 22, 2022 via email

@constantinfite
Author

constantinfite commented Feb 22, 2022

My bad: I only ran inference on the Coral with the plain TensorFlow Lite model, but I have to run the Edge TPU Compiler before running on the Coral.
I tried the command edgetpu_compiler -sa yolov5s-224-int8.tflite -d -t 600 with my model, but it says:

Edge TPU Compiler version 16.0.384591198
Searching for valid delegate with step 1
Try to compile segment with 261 ops
Started a compilation timeout timer of 600 seconds.
Compilation child process completed within timeout period.
Compilation failed!
Try to compile segment with 260 ops
Intermediate tensors: StatefulPartitionedCall:0_int8
Started a compilation timeout timer of 600 seconds.
Compilation child process completed within timeout period.
Compilation failed!
Try to compile segment with 259 ops
Intermediate tensors: model/tf_detect/Reshape_5,model/tf_detect/Reshape_1_requantized,model/tf_detect/Reshape_3_requantized

I tried with the stock yolov5s model and the detection works great on the Coral on a video, so I think the problem is my model. It was trained with yolov5 v3.1, so maybe it's deprecated.

@constantinfite
Author

I did the Edge TPU Compiler step on Google Cloud for my model and it works, but when I run the detection on my Coral with the Edge TPU model, it behaves the same as before: the detection is slow and the bounding boxes are not correct.

@jveitchmichaelis
Owner

OK, I'll take a look at the model you sent over when I get a chance. It's possible that the compilation for EdgeTPU makes the model perform poorly. If we can't figure it out, you can also contact the EdgeTPU guys directly about this; they're generally quite helpful and can look at your input/output models.
