Is it possible to use additional object attributes in model validation to obtain metrics based on them? #12126

Altanus opened this issue May 11, 2024 · 4 comments

@Altanus

Altanus commented May 11, 2024

Search before asking

Question

I'm currently using Ultralytics for keypoint detection. I've trained my model and I wonder whether it is possible to get validation metrics separately for occluded objects and non-occluded ones.
And is it possible to do the same with custom object attributes other than labels?

Additional

No response

Altanus added the question label on May 11, 2024

👋 Hello @Altanus, thank you for your interest in Ultralytics YOLOv8 🚀! We recommend a visit to the Docs for new users where you can find many Python and CLI usage examples and where many of the most common questions may already be answered.

If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.

Join the vibrant Ultralytics Discord 🎧 community for real-time conversations and collaborations. This platform offers a perfect space to inquire, showcase your work, and connect with fellow Ultralytics users.

Install

Pip install the ultralytics package including all requirements in a Python>=3.8 environment with PyTorch>=1.8.

pip install ultralytics
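
Once installed, you can optionally sanity-check the setup from Python. This small snippet uses ultralytics.checks(), which should print the installed version along with Python, PyTorch and hardware details:

import ultralytics

ultralytics.checks()  # print ultralytics version, Python/torch versions and available hardware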

Environments

YOLOv8 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

If the Ultralytics CI badge is green, all Ultralytics CI tests are currently passing. CI tests verify correct operation of all YOLOv8 Modes and Tasks on macOS, Windows, and Ubuntu every 24 hours and on every commit.

@glenn-jocher
Member

Hello! Great question! 👍

YOLOv8 does not separate validation metrics by attributes like occlusion or other custom attributes out of the box. However, you can implement a custom evaluation metric to obtain them.

For instance, you could modify your validation set to include a flag for occlusion in your labels, and then during validation, segment the results based on these flags to compute metrics separately. Here’s a simple conceptual outline using Python:

from ultralytics import YOLO, evaluate

# Load your model
model = YOLO('path/to/your/model.pt')

# Custom evaluation method to compute metrics based on occlusion
def evaluate_occlusion(data, occluded=False):
    results = model.val(data=data)
    if occluded:
        # Filter results for occluded objects based on your custom flag/attribute
        occluded_results = [r for r in results if r.attribute == 'occluded']
        metrics = evaluate(occluded_results)
    else:
        non_occluded_results = [r for r in results if r.attribute != 'occluded']
        metrics = evaluate(non_occluded_results)
    return metrics

# Usage
occluded_metrics = evaluate_occlusion('path/to/your/dataset.yaml', occluded=True)
non_occluded_metrics = evaluate_occlusion('path/to/your/dataset.yaml', occluded=False)

This example assumes your data can be filtered by an ‘attribute’ flag; adjust as per your data structure and requirements. This approach provides flexibility in handling and evaluating various object attributes. Happy coding! 😊
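
As a more concrete variant of the idea above, you could also split the validation labels into occluded and non-occluded copies and run the standard model.val() on each split. The sketch below is only a rough outline: it assumes your pose labels use the 3-value keypoint format (x, y, visibility), treats an instance as occluded whenever any visibility flag is below 2, and uses placeholder paths and keypoint count that you would adapt to your dataset.

from pathlib import Path

from ultralytics import YOLO

# Placeholder paths and keypoint count -- adapt these to your dataset layout
SRC_LABELS = Path('path/to/your/labels/val')
OCC_LABELS = Path('path/to/your_occluded/labels/val')
VIS_LABELS = Path('path/to/your_visible/labels/val')
N_KPTS = 17  # keypoints per instance in your dataset

def is_occluded(parts):
    # Label row format: class cx cy w h (x y v) * N_KPTS; any visibility < 2 counts as occluded
    visibilities = [float(parts[5 + 3 * i + 2]) for i in range(N_KPTS)]
    return any(v < 2 for v in visibilities)

def split_labels():
    OCC_LABELS.mkdir(parents=True, exist_ok=True)
    VIS_LABELS.mkdir(parents=True, exist_ok=True)
    for label_file in SRC_LABELS.glob('*.txt'):
        occluded_rows, visible_rows = [], []
        for line in label_file.read_text().splitlines():
            (occluded_rows if is_occluded(line.split()) else visible_rows).append(line)
        (OCC_LABELS / label_file.name).write_text('\n'.join(occluded_rows))
        (VIS_LABELS / label_file.name).write_text('\n'.join(visible_rows))

split_labels()

# Ultralytics locates labels by swapping 'images' for 'labels' in the image paths,
# so each split typically needs its own dataset folder (the images can be symlinks)
# and its own dataset YAML.
model = YOLO('path/to/your/model.pt')
occluded_metrics = model.val(data='path/to/your_occluded/dataset.yaml')
visible_metrics = model.val(data='path/to/your_visible/dataset.yaml')

One caveat with this kind of split: detections of instances that were removed from a split will count as false positives there, so treat the per-split numbers as approximate.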

@Altanus
Author

Altanus commented May 11, 2024

Hello! Thank you for your answer. I think that's a great way to approach the issue. And thank you for the code suggestion.
Nevertheless, I still have a few questions.

  1. Iterating through results = model.val(data=data) should give a list of metrics. Is that what you meant? Or do I have to use model.predict(data=data) to iterate over predictions?
  2. You've mentioned from ultralytics import YOLO, evaluate. Did you mean a specific module? Can you tell me where I can find it? Or am I expected to code it myself? (Actually not the best option, as keypoint detection metrics are quite complicated.)

@glenn-jocher
Member

Hello!

I'm delighted to clarify those points for you!

  1. Apologies for any confusion. You should indeed use model.predict() to obtain per-item predictions for your dataset and then manually apply your evaluation logic based on occlusion or other criteria. model.val() is typically used to assess model performance on the entire validation dataset and does not provide the detailed, itemized data needed for attribute-specific metrics unless you manually extract and process it from the output.

  2. Regarding the evaluate function: this was illustrative and assumes you implement a custom function that computes detailed attribute-based performance metrics (see the sketch after the code below). If creating such a function sounds daunting, I recommend checking out the evaluation tools provided by popular libraries like NumPy or scikit-learn, which can help calculate metrics once you filter the inputs to the specific conditions (like occlusion).

Here’s a slight modification using predict():

from ultralytics import YOLO

# Load your model
model = YOLO('path/to/your/model.pt')

# Custom evaluation method to compute metrics based on occlusion
def evaluate_occlusion(data, occluded=False):
    # predict() expects an image source (e.g. your validation images directory) rather than a dataset YAML
    results = model.predict(source=data)  # Get predictions for each item
    # 'r.attribute' is a placeholder -- Results objects don't carry custom attributes,
    # so in practice you would look the occlusion flag up in your own annotations
    if occluded:
        filtered_results = [r for r in results if r.attribute == 'occluded']
    else:
        filtered_results = [r for r in results if r.attribute != 'occluded']
    # Apply your evaluation logic to filtered_results here, such as accuracy, recall, etc.
    metrics = custom_evaluate(filtered_results)  # Implement your metrics calculation
    return metrics

def custom_evaluate(results):
    # Implement your evaluation logic here
    pass

# Usage
occluded_metrics = evaluate_occlusion('path/to/your/validation/images', occluded=True)
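
And as one rough example of the kind of logic custom_evaluate could wrap, the sketch below computes a simple PCK (Percentage of Correct Keypoints) score. It assumes you have already paired each predicted instance with its ground-truth keypoints and a normalization scale (for example the bounding-box diagonal); the pairing step, the pck_score name and the 0.05 threshold are assumptions rather than part of the YOLOv8 API.

import numpy as np

def pck_score(pairs, threshold=0.05):
    # pairs: iterable of (pred_kpts, gt_kpts, scale), where pred_kpts and gt_kpts are
    # (N, 2) arrays of xy coordinates for one instance (predicted keypoints can be taken
    # from a Results object, e.g. r.keypoints.xy) and scale is a normalization length
    correct, total = 0, 0
    for pred_kpts, gt_kpts, scale in pairs:
        dists = np.linalg.norm(np.asarray(pred_kpts) - np.asarray(gt_kpts), axis=1)
        correct += int((dists / scale < threshold).sum())
        total += dists.size
    return correct / total if total else 0.0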

I hope this helps streamline your development process! Let me know if there's anything else you need. 😊
