The CIDEr scores for YouCookII are different #14

Open
yangxingrui opened this issue Jan 2, 2024 · 2 comments

Comments


yangxingrui commented Jan 2, 2024

Similarly, I trained the model on the YouCookII dataset with the same configuration you provide on GitHub, but my CIDEr score is much lower than yours.
Here is a comparison:

|             | METEOR | ROUGE_L | CIDEr | Bleu_4 |
| ----------- | ------ | ------- | ----- | ------ |
| Your scores | 17.94  | 34.55   | 48.7  | 9.4    |
| My scores   | 17.27  | 34.3    | 43.71 | 9.11   |

A similar discrepancy occurred with VLCAP.

@Kashu7100
Contributor

Thank you for your interest in our work, and sorry for the delayed reply. I cannot tell why you are getting a lower CIDEr score, but I am happy to help you figure it out. For the time being, I have attached the evaluation JSON file for the paper.

model_best_greedy_pred_val_all_metrics.json

If you can share with me how you set up the data (dataloader, feature extraction, etc.), maybe I can help you debug the cause.
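As a first step, it may help to diff your metrics against the attached JSON to see which metrics drift and by how much. A minimal sketch, assuming you have loaded both sets of scores into flat `{metric: value}` dicts (the exact layout of the attached JSON may differ, so adapt the loading step as needed):

```python
import json


def compare_metrics(ref_metrics, my_metrics):
    """Return {metric: (reference, mine, diff)} for metrics present in both dicts."""
    return {
        m: (ref_metrics[m], my_metrics[m], round(my_metrics[m] - ref_metrics[m], 2))
        for m in ref_metrics
        if m in my_metrics
    }


if __name__ == "__main__":
    # Scores from the comparison above; in practice, load these with
    # json.load() from the attached evaluation file and your own output.
    ref = {"METEOR": 17.94, "ROUGE_L": 34.55, "CIDEr": 48.7, "Bleu_4": 9.4}
    mine = {"METEOR": 17.27, "ROUGE_L": 34.3, "CIDEr": 43.71, "Bleu_4": 9.11}
    for name, (r, m, d) in compare_metrics(ref, mine).items():
        print(f"{name}: reference={r} mine={m} diff={d}")
```

If only CIDEr drops while METEOR and BLEU stay close, that usually points at a references/tokenization mismatch rather than the model itself.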

@yangxingrui
Author

Sorry for the long delay in continuing our discussion; I was occupied with other matters. The dataloader I am using is from your published VLCAP work, and the feature extraction follows the Data preparation section of your VLTint repository, without any modifications.
