
Regarding the data preparation for Gref #4

Open
NatureExplorer24 opened this issue Mar 12, 2024 · 1 comment

@NatureExplorer24

Thanks, authors, for sharing the code.

After running the two commands below:

python build_batches.py -d Gref -t train 
python build_batches.py -d Gref -t val 

it results in a "train_batch" folder with 85,474 files and a "val_batch" folder with 9,536 files (85,474 + 9,536 = 95,010 in total). However, according to the paper, Gref should contain 104,560 expressions. Could you kindly clarify this inconsistency? Thank you!
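For reference, here is a minimal sketch of how I counted the files (the "train_batch" and "val_batch" directory names come from the output above; the base path is an assumption about where build_batches.py writes its output on my machine):

import os

# Count the batch files produced by build_batches.py for each split.
# Adjust `base` to wherever the script writes its output in your setup.
base = "Gref"  # assumed output location
for split_dir in ("train_batch", "val_batch"):
    path = os.path.join(base, split_dir)
    print(f"{split_dir}: {len(os.listdir(path))} files")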

@SouthFlame
Collaborator

Thanks for your attention to our paper :)

First, we follow previous studies for the data preparation and the number of expressions on all datasets.

To our understanding, G-Ref has two different partitions of the validation set, one by UMD and the other by Google. The details are explained well in Section 4.1 (Datasets and Metrics) of [1]. In our case, we use only the validation split by Google, so we believe the remaining expressions belong to the other validation set (by UMD), which we did not use.

Note that all performance comparisons on the G-Ref val set are evaluated on the same validation split by Google, following previous studies.
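For example, one way to check the split sizes directly is with the public refer toolkit (https://github.com/lichengunc/refer). This is only a sketch, and the data root path is a placeholder:

from refer import REFER  # https://github.com/lichengunc/refer

# Compare expression counts under the two G-Ref partitions:
# splitBy="google" provides train/val, splitBy="umd" provides train/val/test.
for split_by in ("google", "umd"):
    refer = REFER("path/to/refer/data", dataset="refcocog", splitBy=split_by)
    for split in ("train", "val"):
        ref_ids = refer.getRefIds(split=split)
        n_expr = sum(len(refer.Refs[rid]["sentences"]) for rid in ref_ids)
        print(f"splitBy={split_by}, {split}: {len(ref_ids)} refs, {n_expr} expressions")

This should show how the expressions are distributed across the two partitions and where the remaining expressions fall.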

[1] LAVT: Language-Aware Vision Transformer for Referring Image Segmentation

Best regards,

Namyup Kim
