Confusion Matrix gets me confused #12584

Open
1 task done
pfcouto opened this issue May 12, 2024 · 2 comments
Labels
question Further information is requested

Comments

pfcouto commented May 12, 2024

Search before asking

Question

So, I know this question has been asked before, but I want to add something to those discussions. As shown in the image below, I have 317 "Background" regions being predicted as "Tomato", which is incorrect and should not be happening. Is there a way to find out which images, and which labels within them, are contributing to those 317 misclassifications? Thanks!

confusion_matrix

Additional

No response
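For context on where that 317 lives: in Ultralytics confusion-matrix plots, the y-axis is the predicted class, the x-axis is the true class, and "background" is appended as the last row/column, so the 317 is the single cell at (predicted = Tomato, true = background). A minimal sketch with hypothetical class names and counts (only the 317 is taken from the question above):

```python
# Hypothetical class list; Ultralytics appends "background" as the last index
names = ['Tomato', 'Leaf', 'background']

# cm[i][j] = number of predictions of class i whose true class is j
cm = [
    [950,  12, 317],  # predicted Tomato
    [  8, 820,  40],  # predicted Leaf
    [ 30,  25,   0],  # predicted background (i.e. missed objects)
]

pred, true = names.index('Tomato'), names.index('background')
print(f'{cm[pred][true]} background regions predicted as Tomato')
# -> 317 background regions predicted as Tomato
```

Each count in that cell is a predicted box that matched no ground-truth box of any class, i.e. a false positive on unlabeled background.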

pfcouto added the question label May 12, 2024

👋 Hello @pfcouto, thank you for your interest in Ultralytics YOLOv8 🚀! We recommend a visit to the Docs for new users where you can find many Python and CLI usage examples and where many of the most common questions may already be answered.

If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.

Join the vibrant Ultralytics Discord 🎧 community for real-time conversations and collaborations. This platform offers a perfect space to inquire, showcase your work, and connect with fellow Ultralytics users.

Install

Pip install the ultralytics package including all requirements in a Python>=3.8 environment with PyTorch>=1.8.

pip install ultralytics

Environments

YOLOv8 may be run in any of Ultralytics' up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled).

Status

Ultralytics CI

If this badge is green, all Ultralytics CI tests are currently passing. CI tests verify correct operation of all YOLOv8 Modes and Tasks on macOS, Windows, and Ubuntu every 24 hours and on every commit.

@glenn-jocher
Member

@pfcouto hi there! It looks like you're aiming to identify the specific images behind those misclassified detections. Note that `model.val()` returns aggregate metrics rather than per-image predictions, so the practical approach is to run inference over your validation images yourself and flag every image where the model predicts "Tomato"; any flagged image whose ground-truth labels contain no "Tomato" box is contributing to that background-to-Tomato cell.

Here's a quick code snippet showing how you can do this using Python, assuming you're using YOLOv8 for predictions:

```python
from ultralytics import YOLO

# Load your trained model
model = YOLO('path/to/your/model.pt')

# Run inference on your validation images
results = model.predict('path/to/val/images')

# Look up the integer class index for "Tomato" in the model's name map
tomato_id = next(k for k, v in model.names.items() if v == 'Tomato')

# Print every image with at least one "Tomato" prediction
for r in results:
    if any(int(c) == tomato_id for c in r.boxes.cls):
        print(f'"Tomato" predicted in image: {r.path}')
```

This lists each validation image with at least one "Tomato" prediction; cross-checking those images against their ground-truth label files will show which detections have no matching annotation and are therefore counted against background. Make sure to adjust the class name and paths as per your setup. Use this as a starting point and tailor it as needed. Hope this helps clear things up! 🍅🔍
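To decide automatically whether a given "Tomato" prediction sits on unannotated background, the usual criterion is IoU against the image's ground-truth boxes (Ultralytics' confusion matrix uses an IoU threshold of 0.45 by default; treat that value as an assumption for your version). A self-contained sketch with hypothetical boxes, no Ultralytics dependency:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# Hypothetical predictions and ground truth for one image
preds = [(10, 10, 50, 50), (200, 200, 260, 260)]
gts = [(12, 8, 48, 52)]

# A prediction with no ground-truth box above the IoU threshold is a
# background false positive (it lands in the background column)
false_positives = [p for p in preds
                   if all(iou(p, g) < 0.45 for g in gts)]
print(false_positives)  # -> [(200, 200, 260, 260)]
```

The first box overlaps the annotation heavily and is a genuine match; the second matches nothing, so it is exactly the kind of detection that adds to the 317.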
