This repository has been archived by the owner on Mar 22, 2021. It is now read-only.
Confusion Matrix #36
Hmm, I have to say that I simply ran this evaluation (which was written for DSB-2018), changed the IoU computation to the COCO implementation for speed, added handling for empty predictions, and just lived with it :). That being said, I will look into it and get back to you. Thank you @AakashKumarNain for pointing this out.
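For context, the COCO-style trick mentioned above computes the IoU of every (ground truth, prediction) instance pair in one vectorized pass over the label images, instead of looping over masks. This is a minimal sketch, not the repository's actual code; `pairwise_iou` and its conventions (0 = background, instances numbered 1..N, DSB-2018-style label images) are assumptions.

```python
import numpy as np

def pairwise_iou(gt_labels, pred_labels):
    """IoU matrix between every ground-truth and predicted instance.

    gt_labels / pred_labels are 2-D integer label images where 0 is
    background and 1..N index individual instances.
    Returns an array of shape (n_gt, n_pred).
    """
    n_gt = int(gt_labels.max())
    n_pred = int(pred_labels.max())
    # Handle empty predictions (or empty ground truth) explicitly,
    # as the comment above mentions.
    if n_gt == 0 or n_pred == 0:
        return np.zeros((n_gt, n_pred))
    # Joint histogram counts the pixels shared by each (gt, pred) pair;
    # explicit integer bin edges keep labels aligned with bins.
    intersection = np.histogram2d(
        gt_labels.ravel(), pred_labels.ravel(),
        bins=(np.arange(n_gt + 2), np.arange(n_pred + 2)),
    )[0][1:, 1:]  # drop the background row and column
    area_gt = np.histogram(gt_labels, bins=np.arange(n_gt + 2))[0][1:]
    area_pred = np.histogram(pred_labels, bins=np.arange(n_pred + 2))[0][1:]
    union = area_gt[:, None] + area_pred[None, :] - intersection
    return intersection / union
```

The single `histogram2d` call replaces a per-mask double loop, which is where the speedup over a naive per-instance IoU comes from.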
jakubczakon added a commit that referenced this issue on Oct 13, 2018:
* added image channel and params to config (#29)
* exping
* added large kernel matters architecture, renamed stuff, generalized c… (#30)
* added large kernel matters architecture, renamed stuff, generalized conv2drelubn block
* exping
* exping
* copied the old ConvBnRelu block to make sure it is easy to finetune old models
* reverted main
* Depth (#31)
* exping
* exping
* added depth loaders, and depth_excitation layer, adjusted models and callbacks to deal with both
* fixed minor issues
* exping
* merged/refactored
* exping
* refactored architectures, moved use_depth param to main
* added dropout to lkm constructor, dropped my experiment dir definition
* Second level (#33)
* exping
* first stacked unet training
* fixed minor typo-bugs
* fixed unet naming bug
* added stacking preds exploration
* dropped redundant imports
* adjusted callbacks to work with stacking, added custom to_tensor_stacking
* Auxiliary data (#34)
* exping
* exping
* fixed stacking postpro
* Stacking (#36)
* exping
* exping
* fixed stacking postpro
* exping
* added fully convo stacking, fixed minor issues with loader_mode: stacking
* Update architectures.py import fix
* Update README.md
* Update models.py reverted to default (current best) large kernel matters internal_channel_nr
* Stacking (#37)
* Stacking depth (#38)
* exping
* added depth option to stacking model, dropped stacking unet from models
* Empty non empty (#39)
* exping
* added empty vs non empty loaders/models and execution
* changed to lovasz loss as default from bce
* reverted default callbacks target name
As per the discussions on Kaggle, your implementation is the only one that is fully correct for the given metric, but there is one thing in your code that I couldn't understand. Here are the three functions:

Now, given that the `compute_ious` function works on a single prediction and its corresponding ground truth, `ious` will be a singleton array. How are you then calculating TP/FP from that? Am I missing something here?
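For context on the TP/FP question: in the usual DSB-2018-style metric, the per-instance IoUs are stacked into a full n_gt x n_pred matrix, and TP/FP/FN are then counted by thresholding that matrix, not a singleton. A hedged sketch of that standard counting (function names are illustrative, not the repository's actual code):

```python
import numpy as np

def precision_at(iou, threshold):
    """TP/FP/FN at one IoU threshold, given an (n_gt, n_pred) IoU matrix.

    For thresholds >= 0.5 the matching is automatically one-to-one,
    since two predictions cannot both overlap the same ground-truth
    instance with IoU greater than 0.5.
    """
    matches = iou > threshold
    tp = int(np.sum(matches.any(axis=1)))   # gt instances with a match
    fp = int(np.sum(~matches.any(axis=0)))  # predictions with no match
    fn = int(np.sum(~matches.any(axis=1)))  # gt instances missed
    return tp, fp, fn

def mean_average_precision(iou):
    """Average TP / (TP + FP + FN) over IoU thresholds 0.5, 0.55, ..., 0.95."""
    scores = []
    for t in np.arange(0.5, 1.0, 0.05):
        tp, fp, fn = precision_at(iou, t)
        denom = tp + fp + fn
        scores.append(tp / denom if denom else 0.0)
    return float(np.mean(scores))
```

For example, an IoU matrix `[[0.9, 0.0], [0.0, 0.6]]` (two ground-truth instances, two predictions) scores perfectly at threshold 0.5 but loses the second match above 0.6, which is exactly the kind of per-threshold counting a singleton `ious` array cannot support.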