
Confusion Matrix #36

Open
AakashKumarNain opened this issue Aug 25, 2018 · 1 comment
@AakashKumarNain

As per the discussions on Kaggle, your implementation is the only one that is fully correct for the given metric, but there is one thing in your code that I couldn't understand. Here are the three functions:

import numpy as np
from pycocotools import mask as cocomask


def compute_ious(gt, predictions):
    # get_segmentations (defined elsewhere in the repo) converts a labeled
    # mask into a list of per-instance COCO RLE encodings.
    gt_ = get_segmentations(gt)
    predictions_ = get_segmentations(predictions)

    if len(gt_) == 0 and len(predictions_) == 0:
        return np.ones((1, 1))   # nothing to find, nothing predicted
    elif len(gt_) != 0 and len(predictions_) == 0:
        return np.zeros((1, 1))  # every ground-truth instance was missed
    else:
        iscrowd = [0 for _ in predictions_]
        # (n_gt, n_pred) matrix of pairwise IoUs
        ious = cocomask.iou(gt_, predictions_, iscrowd)
        if not np.array(ious).size:
            ious = np.zeros((1, 1))
        return ious


def compute_precision_at(ious, threshold):
    mx1 = np.max(ious, axis=0)  # best IoU for each prediction
    mx2 = np.max(ious, axis=1)  # best IoU for each ground-truth instance
    tp = np.sum(mx2 >= threshold)  # ground-truth instances that are matched
    fp = np.sum(mx2 < threshold)   # ground-truth instances left unmatched
    fn = np.sum(mx1 < threshold)   # predictions that match nothing
    return float(tp) / (tp + fp + fn)


def compute_eval_metric(gt, predictions):
    # Mean precision over the ten IoU thresholds 0.50, 0.55, ..., 0.95.
    thresholds = [0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95]
    ious = compute_ious(gt, predictions)
    precisions = [compute_precision_at(ious, th) for th in thresholds]
    return sum(precisions) / len(precisions)

Now, given that compute_ious works on a single prediction and its corresponding ground truth, ious will be a singleton array. How, then, are you calculating TP/FP from that? Am I missing something here?
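
For reference, here is a toy example of how compute_precision_at behaves when ious is a full matrix rather than a single value, i.e. when the image contains several ground-truth and predicted instances (the numbers below are made up):

import numpy as np

# Hypothetical IoU matrix: 2 ground-truth instances x 3 predictions,
# laid out the way cocomask.iou(gt_, predictions_, iscrowd) returns it.
ious = np.array([[0.8, 0.1, 0.0],
                 [0.0, 0.6, 0.2]])

threshold = 0.5
best_per_prediction = np.max(ious, axis=0)    # [0.8, 0.6, 0.2]
best_per_gt = np.max(ious, axis=1)            # [0.8, 0.6]
tp = np.sum(best_per_gt >= threshold)         # 2 matched ground-truth instances
fp = np.sum(best_per_gt < threshold)          # 0 unmatched ground-truth instances
fn = np.sum(best_per_prediction < threshold)  # 1 prediction that matches nothing
print(float(tp) / (tp + fp + fn))             # 2 / 3 ≈ 0.667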

@jakubczakon
Contributor

Hmm, I have to say that I simply reused this evaluation (it was written for DSB-2018), changed the IoU to the COCO implementation for speed, added handling for empty predictions, and just lived with it :).

That being said I will look into it and get back to you.

Thank you @AakashKumarNain for pointing this out.
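
For anyone reading along: get_segmentations is not shown in the snippet above. Here is a minimal sketch of what such a helper might look like, assuming it turns a labeled mask (0 = background, 1..N = instance ids) into per-instance COCO RLE encodings; the body below is my reconstruction, not necessarily the repo's exact code:

import numpy as np
from pycocotools import mask as cocomask

def get_segmentations(labeled):
    # Encode each instance id of a labeled mask as a COCO RLE dict,
    # the input format that cocomask.iou expects.
    segmentations = []
    for instance_id in range(1, int(labeled.max()) + 1):
        binary_mask = (labeled == instance_id).astype(np.uint8)
        rle = cocomask.encode(np.asfortranarray(binary_mask))
        segmentations.append(rle)
    return segmentations

With a helper like that in place, compute_eval_metric(gt_labeled, pred_labeled) runs end to end on a pair of labeled masks.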

jakubczakon added a commit that referenced this issue Oct 13, 2018
* added image channel and params to config (#29)

* exping

* added large kernel matters architecture, renamed stuff, generalized c… (#30)

* added large kernel matters architecture, renamed stuff, generalized conv2drelubn block

* exping

* exping

* copied the old ConvBnRelu block to make sure it is easy to finetune old models

* reverted main

* Depth (#31)

* exping

* exping

* added depth loaders, and depth_excitation layer, adjusted models and callbacks to deal with both

* fixed minor issues

* exping

* merged/refactored

* exping

* refactored architectures, moved use_depth param to main

* added dropout to lkm constructor, dropped my experiment dir definition

* Second level (#33)

* exping

* first stacked unet training

* fixed minor typo-bugs

* fixed unet naming bug

* added stacking preds exploration

* dropped redundant imports

* adjusted callbacks to work with stacking, added custom to_tensor_stacking

* Auxiliary data (#34)

* exping

* added option to use auxiliary masks

* Stacking (#35)

* exping

* exping

* fixed stacking postpro

* Stacking (#36)

* exping

* exping

* fixed stacking postpro

* exping

* added fully convo stacking, fixed minor issues with loader_mode: stacking

* Update architectures.py

import fix

* Update README.md

* Update models.py

reverted to default (current best) large kernel matters internal_channel_nr

* Stacking (#37)

Stacking

* Stacking depth (#38)

* exping

* added depth option to stacking model, dropped stacking unet from models

* Empty non empty (#39)

* exping

* added empty vs non empty loaders/models and execution

* changed to lovasz loss as default from bce

* reverted default callbacks target name