
option error #1

Open
ksp7518 opened this issue Sep 10, 2021 · 7 comments

ksp7518 commented Sep 10, 2021

Hello,
I'm trying to run a detection test with an *.mp4 file:
python detect.py -m yolov5s-int8_edgetpu_416.tflite --stream test_1min.mp4

I got this error:

usage: EdgeTPU test runner [-h] --model MODEL [--bench_speed] [--bench_image] [--conf_thresh CONF_THRESH] [--iou_thresh IOU_THRESH] [--names NAMES] [--image IMAGE] [--device DEVICE] [--stream] [--bench_coco] [--coco_path COCO_PATH] [--quiet] EdgeTPU test runner: error: unrecognized arguments: test_1min.mp4

and I tried an image:
python detect.py --model yolov5s-int8_edgetpu_416.tflite --image bus.jpg

INFO:EdgeTPUModel:Confidence threshold: 0.25
INFO:EdgeTPUModel:IOU threshold: 0.45
INFO:EdgeTPUModel:Loaded 80 classes
INFO:EdgeTPUModel:Successfully loaded E:\AI\model\yolov5s-int8_edgetpu_416.tflite
INFO:__main__:Testing on user image: bus.jpg
INFO:EdgeTPUModel:Attempting to load bus.jpg

I can't find the detected image. I think there is no saving code.

I modified some parts for testing:
python detect.py -m yolov5s-int8_edgetpu_416.tflite --stream

[screenshots]

and I could execute detect.py (CPU load was about 50%), but no detection boxes could be found.

[screenshot]

How can I fix this?

jveitchmichaelis (Owner) commented Sep 10, 2021

Hi,

There is currently no support for video files; the --stream option is for using a webcam. I could add support to read from a video file easily enough, though.
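For context, the "unrecognized arguments: test_1min.mp4" error above is what argparse reports when a value follows a boolean flag. A minimal sketch, reconstructed from the usage string (the flag names match; the rest is assumed, not the repository's exact code):

```python
import argparse

# Sketch of the relevant part of the CLI, reconstructed from the usage
# string in the error message above.
parser = argparse.ArgumentParser("EdgeTPU test runner")
parser.add_argument("--model", "-m", required=True)
parser.add_argument("--stream", action="store_true")  # boolean flag: takes no value
parser.add_argument("--device", default=0)

# This works: --stream consumes nothing after it.
args = parser.parse_args(["-m", "model.tflite", "--stream"])

# Passing "--stream test_1min.mp4" leaves test_1min.mp4 unconsumed,
# which is exactly why argparse reports "unrecognized arguments".
```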

Are you using your own model (yolov5s-int8_edgetpu_416.tflite)? Can you try with one of the provided ones and check that it works correctly? If your model does not have the detection layer built in, then detections won't be generated.

Also, the default output is <image_name>_det.jpg - but currently this image is not created if there are no detections.

See:

def predict(self, image_path, save_img=True, save_txt=True):

def process_predictions(self, det, output_image, pad, output_path="detection.jpg", save_img=True, save_txt=True, hide_labels=False, hide_conf=False):
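The naming convention can be sketched like this (an illustration of the <image_name>_det.jpg convention described above, not the repository's exact code):

```python
from pathlib import Path

def default_output_path(image_path: str) -> str:
    """Derive the <image_name>_det.jpg output name for a given input image.

    Sketch of the naming convention only; the real code lives in
    predict()/process_predictions() referenced above.
    """
    p = Path(image_path)
    return str(p.with_name(p.stem + "_det.jpg"))

print(default_output_path("bus.jpg"))  # bus_det.jpg
```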

Please try to run the code using the instructions in the readme and verify that you can classify e.g. Zidane correctly, using the supplied weights. If that works then we can take a look at the 416 model and try to get that running.

jveitchmichaelis (Owner) commented:

Also try using the --device argument, e.g. python detect.py -m yolov5s-int8_edgetpu_416.tflite --stream --device test_1min.mp4
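A --device argument like this typically accepts either a numeric webcam index or a file path. A hypothetical helper showing that dispatch (the function name is made up for illustration):

```python
def resolve_device(device: str):
    """Interpret --device as a camera index if numeric, else a file path.

    Hypothetical helper illustrating the convention; not the repo's code.
    """
    try:
        return int(device)   # e.g. "0" -> webcam index 0
    except ValueError:
        return device        # e.g. "test_1min.mp4" -> video file path
```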

ksp7518 (Author) commented Sep 13, 2021

@jveitchmichaelis
I can't execute your model because of this error, but the script command itself is working:

python detect.py -m yolov5s-int8-224_edgetpu.tflite --stream --device 0

INFO:EdgeTPUModel:Confidence threshold: 0.25
INFO:EdgeTPUModel:IOU threshold: 0.45
INFO:EdgeTPUModel:Loaded 80 classes

Process finished with exit code -1073741819 (0xC0000005)

I modified it a bit to make it work:
[screenshot]
I use Edge TPU runtime v13 because of the error; my model works on v13.

Could you check whether my model has the detection layer? I cannot execute your model because of the error above.
yolov5s-int8_edgetpu_416.zip

If you can make your model compatible with version 13, I could test it:
edgetpu_compiler --min_runtime_version 13 your_model.tflite
I'm really curious whether your model can reduce the CPU load.

Thank you for the reply.

jveitchmichaelis (Owner) commented:
Oh, that's a good point - in the case of a webcam, the image is already a stream. I've pushed a fix for that: if you pass a string it will be treated as an image path, otherwise it will be assumed to be a numpy array.
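The fix described amounts to a type check along these lines (a sketch; load_image() is a placeholder for whatever the library actually uses to read a file from disk):

```python
import numpy as np

def get_image(source):
    """Accept either a file path (str) or an already-decoded frame.

    Sketch of the string-vs-array dispatch described above; load_image()
    stands in for the real file-loading call and is not defined here.
    """
    if isinstance(source, str):
        return load_image(source)   # path: read the image from disk (placeholder)
    if isinstance(source, np.ndarray):
        return source               # webcam frame: already a decoded array
    raise TypeError(f"unsupported image source: {type(source)!r}")
```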

So the runtime and compiler versions are closely linked. Models should be forward compatible, i.e. your v13 model should work on a v14 runtime, but not vice versa. I'll take a look at your model, but I'm expecting that users of this repository will have the latest runtime installed (I don't think that will make a difference).

When you say your model is working on v13, do you mean running .invoke() doesn't work at all on a later runtime? Or do you mean that the results don't look correct?

What I'll do for now is add the detect function from Torch to check it works, and then I'll convert it to numpy.

jveitchmichaelis (Owner) commented:
Just to check, are you running on Windows? And are you using a conda/venv to run the code? Unfortunately I don't have a USB accelerator here to test with.

I'm not sure what your error is related to - it could be anything, to be honest. But if you don't get a Python backtrace, then I think it's likely an issue with your installation somewhere. Do you know exactly when the code crashes? Try setting the logger to 'debug' and see if you can find out.
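Turning on debug logging can be as simple as the following (standard-library logging; the "EdgeTPUModel" logger name is taken from the log lines quoted earlier in this thread):

```python
import logging

# Raise verbosity for every logger (root and children such as
# EdgeTPUModel and __main__), so the last message printed before the
# crash narrows down where it happens. force=True (Python 3.8+)
# reconfigures even if logging was already set up.
logging.basicConfig(level=logging.DEBUG, force=True)

logging.getLogger("EdgeTPUModel").debug("debug logging enabled")
```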

Process finished with exit code -1073741819 (0xC0000005)

ksp7518 (Author) commented Sep 14, 2021

@jveitchmichaelis Yes, I use Windows 10 and Anaconda.
After installing v14, the following error occurs when running the program, so I installed v13 and use that instead. Therefore I also compile with the option to stay compatible with v13: Process finished with exit code -1073741819 (0xC0000005)
[screenshot]

When I execute the code after installing v14, other examples also give the same error.
[screenshot]

[screenshot]
I wonder what the CPU load is when you execute your code on your PC.

jveitchmichaelis (Owner) commented:
Ok, I can't help with the runtime issue - you should probably post on the Google Coral issues page and see if there's a fix for that. However, as I said, provided the model you're running is compatible with the runtime version you're using, this library should work (there is nothing specific to the runtime version in the inference code).

One important addition in v14 is that the transpose operation is supported, so if you can figure out how to upgrade your runtime, that would be useful. You won't be able to run newer models until you do, I think.

CPU usage is quite high on the Jetson Nano that I am testing with, but it's an embedded system so that's somewhat expected. I'll get you some actual numbers shortly.
