
Inconsistent learning curve between the suggested config and the presented logs #8

Open
voladorlu opened this issue Feb 7, 2022 · 1 comment

Comments

@voladorlu

Hi, I tried the suggested config to run the code as follows, but the evaluation results differ from the results reported in the logs folder. Was a different config used to generate those logs?

"python main.py --dataset ali --gnn ngcf --dim 64 --lr 0.0001 --batch_size 1024 --gpu_id 0 --context_hops 3 --pool concat --ns mixgcf --K 1 --n_negs 64"

Take Recall@20 as an example: I get Recall@20 = 0.025 at the 10th epoch evaluated on the test data, while the uploaded logs show it reaching 0.05. Am I missing something important to reproduce the learning curve shown in the logs? :-)
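For reference, this is the conventional Recall@K definition I am assuming when comparing numbers (hits among the top-K ranked items divided by the number of held-out test items per user, then averaged over users). This is only a minimal sketch with hypothetical names, and the repo's evaluator may differ in detail:

```python
import numpy as np

def recall_at_k(ranked_items, ground_truth, k=20):
    """Recall@K for one user: fraction of the user's held-out test items
    that appear in the top-K of the ranked recommendation list."""
    top_k = set(ranked_items[:k])
    hits = len(top_k & set(ground_truth))
    return hits / max(len(ground_truth), 1)

# Example: 3 of the user's 10 test items land in the top-20 -> 0.3
print(recall_at_k(list(range(20)), [1, 5, 17, 100, 101, 102, 103, 104, 105, 106], k=20))
```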

@huangtinglin
Owner

Could you please provide some information about your running environment? I have just run the code and it reproduces the reported performance.
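To collect the relevant environment details, a small snippet like the following could be used (a hypothetical helper, assuming a PyTorch-based setup; it is not part of this repo):

```python
# Report Python / PyTorch / CUDA versions and the visible GPU.
import platform
import torch

print("python :", platform.python_version())
print("torch  :", torch.__version__)
print("cuda   :", torch.version.cuda)
print("cudnn  :", torch.backends.cudnn.version())
if torch.cuda.is_available():
    print("gpu    :", torch.cuda.get_device_name(0))
```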
