Question for Precision Calculation #191

Open · hxlee309 opened this issue Oct 2, 2018 · 3 comments
@hxlee309 commented Oct 2, 2018

Hi,

I checked the folder KittiSeg/DATA/data_road/testing, and it seems there are only test images without the corresponding ground-truth images. However, after I ran $ python evaluate.py and checked the generated log file in KittiSeg/RUNS/KittiSeg_pretrained/analyse, I found the following:

2018-10-02 15:13:41,234 root INFO Evaluation Succesfull. Results:
2018-10-02 15:13:41,234 root INFO MaxF1 : 96.0821
2018-10-02 15:13:41,234 root INFO BestThresh : 14.5098
2018-10-02 15:13:41,234 root INFO Average Precision : 92.3620
2018-10-02 15:13:41,234 root INFO Speed (msec) : 84.2132
2018-10-02 15:13:41,234 root INFO Speed (fps) : 11.8746
...

How can the average precision be calculated here without knowing the ground truth images?

Any help will be appreciated.

Thanks,

Hanxiang

@HelloZEX

I have the same question.

@KavyaRavulapati

In the DATA/data_road/testing folder, annotations are provided in the "calib" folder, and the corresponding images are in the "image_2" folder. So I guess it evaluates against those annotations. That's just my guess; please correct me if I'm wrong.

@lefthandwriter

evaluate.py does an evaluation on both the validation and test data.

I think the metrics you're seeing above (MaxF1, BestThresh, etc.) are for the validation data.

For the test data, it just saves the output images without running any metric evaluation (hence it doesn't need ground-truth images for the files listed in testing.txt). You can refer to the create_test_output() function in submodules/evaluation/kitti_test.py.
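For anyone curious where the validation-side numbers come from: MaxF1 is the best F1 score over a sweep of confidence thresholds (BestThresh being the threshold that achieves it), and Average Precision summarizes the precision/recall curve. Below is a minimal sketch of that computation, assuming a per-pixel road-confidence map and a binary ground-truth mask; it is not KittiSeg's actual evaluation code, and the function name, the 256-step threshold grid, and the use of scikit-learn are illustrative assumptions.

```python
# Hedged sketch: derive MaxF1, BestThresh, and Average Precision from a
# per-pixel confidence map and a binary ground-truth mask (validation data).
# Not KittiSeg's evaluation code; names and the threshold grid are assumptions.
import numpy as np
from sklearn.metrics import average_precision_score

def max_f1_and_ap(conf_map, gt_mask):
    """conf_map: floats in [0, 1]; gt_mask: booleans, same shape."""
    conf = conf_map.ravel()
    gt = gt_mask.ravel().astype(bool)

    best_f1, best_thresh = 0.0, 0.0
    for thresh in np.linspace(0.0, 1.0, 256):   # sweep confidence thresholds
        pred = conf >= thresh
        tp = np.logical_and(pred, gt).sum()
        fp = np.logical_and(pred, ~gt).sum()
        fn = np.logical_and(~pred, gt).sum()
        if tp == 0:
            continue
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        f1 = 2 * precision * recall / (precision + recall)
        if f1 > best_f1:
            best_f1, best_thresh = f1, thresh

    ap = average_precision_score(gt, conf)      # area under the P/R curve
    return best_f1, best_thresh, ap
```

None of this is possible for the test split (no ground truth), which is exactly why create_test_output() only writes prediction images.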
