I am trying to adapt the package to colour-correct underwater images. I trained a YOLOv8 model as described in your blog post to detect the chart I am using. Thanks for the detailed explanations! The model works great: I tested it, and it always detects my chart.
However, when I pass this model to the `detect_colour_checkers_inference()` function as described in the blog post, it often does not work, even though the chart is always detected correctly by the YOLO model. After applying `detect_colour_checkers_inference()`, the chart is sometimes rotated by 90° or 270°, and the colours are therefore not matched correctly. I added a folder with example images, my model, the YOLO results for these images, and my code.
Two additional strange things I encountered when using the code from your blog post:

- When I define the `inferencer_agpl()` function, I had to delete the line `image = image.astype(np.float32)`; otherwise I get this error: `AttributeError: 'str' object has no attribute 'astype'`.
- Although I have a MacBook M1 Pro, I had to remove the `device="mps"` argument to the YOLO `model()` call. Otherwise I get different and much worse results. No idea why.
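Regarding the first point: the `AttributeError` suggests the inferencer is sometimes handed the image's file path (a `str`) rather than a decoded array. Instead of deleting the conversion, I am considering a small coercion helper. This is only a sketch: `as_float_image` is my own name, and the loader inside the `str` branch is a guess (replace it with whatever reader your pipeline uses):

```python
import numpy as np

def as_float_image(image):
    """Coerce the inferencer input to a float32 array, loading it
    first if a file path was passed instead of an array (which is
    what the AttributeError above suggests is happening)."""
    if isinstance(image, str):
        # Hypothetical loader: replace with your actual image reader,
        # e.g. colour.io.read_image or cv2.imread.
        from colour.io import read_image

        image = read_image(image)
    return np.asarray(image).astype(np.float32)
```

With this, the inferencer would work whether it receives a path or an already-decoded array, so the `astype` line would not have to be deleted.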
Could you have a look at my code and suggest some steps I could take to improve it?
Thanks a lot!
I am still trying to figure out how to improve the detection. For some of the photos (but not all), I have the RAW files. I converted them to .png as in the example script, using LibRaw because dcraw doesn't support my camera.
With the .png files, the detection seems a bit better, although the images are quite greenish. I think the main problem in detecting the swatches is the orientation of the chart; if I understood correctly, the better detection on the .png files points in the same direction, since the orientation is determined from a slice along the chart.
Here is an example:
Do you have a recommendation on how to change my code to improve the detection of the orientation? Converting the existing .jpg files to .png didn't improve the detection; it only helps when I convert from the RAW files.
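One brute-force idea I have been considering, in case it helps the discussion: extract the swatches from all four 90° rotations of the chart crop and keep the rotation whose swatches best match the reference values. This is only a sketch under my own assumptions; `extract_swatches` stands in for whatever swatch-sampling step is actually used, and the MSE scoring is my choice, not the package's internal logic:

```python
import numpy as np

def pick_best_rotation(chart, reference, extract_swatches):
    """Try all four 90-degree rotations of the chart crop and keep
    the one whose sampled swatches are closest (mean squared error)
    to the reference swatch colours.

    ``extract_swatches`` is a placeholder for the actual sampling
    step; ``reference`` is a (rows, cols, 3) grid of expected colours.
    """
    best_err, best_deg, best_swatches = np.inf, 0, None
    for k in range(4):
        rotated = np.rot90(chart, k)
        swatches = extract_swatches(rotated)
        if swatches.shape != reference.shape:
            continue  # 90/270 rotations of a non-square grid
        err = float(np.mean((swatches - reference) ** 2))
        if err < best_err:
            best_err, best_deg, best_swatches = err, k * 90, swatches
    return best_deg, best_swatches
```

The same idea could also be applied one level up, by running the whole detection on rotated copies of the input image and keeping the run with the lowest error against the reference chart.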
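Since the strong green cast itself might also be hurting the detector, I wonder whether a cheap grey-world pre-balance before detection would help. Again just a sketch of my own, not part of the package, and only meant as a rough pre-processing step for detection, not as a substitute for the proper colour correction from the chart:

```python
import numpy as np

def grey_world_balance(image):
    """Scale each RGB channel so its mean matches the global mean
    (grey-world assumption). ``image`` is a float RGB array in [0, 1].
    A crude way to reduce the green cast before running detection."""
    image = np.asarray(image, dtype=np.float64)
    channel_means = image.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / np.maximum(channel_means, 1e-8)
    return np.clip(image * gains, 0.0, 1.0)
```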