Questions about Benchmarks #7
Comments
A question: is the model architecture (Figure b in the paper) the same as the implementation (the attention_layer and transformer_model functions in modeling.py)?
Hi, I also ran into the same problem with the benchmarks. I ran BPR-MF (code from https://github.com/duxy-me/ConvNCF) on the same dataset as this paper, with the same popularity-based negative sampling method for the test set. The BPR-MF performance is much higher than the result reported in this paper. Could you provide your experimental code for the baselines?
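For context, the popularity-based negative sampling mentioned above (as commonly used in SASRec/BERT4Rec-style evaluation) draws a fixed number of unseen items per user, with probability proportional to each item's interaction count. A minimal sketch, assuming a simple dict-of-lists interaction format; the function name and signature are illustrative, not the paper's actual code:

```python
import random
from collections import Counter

def sample_negatives(user_history, all_interactions, num_negatives=100, seed=0):
    """Sample negative items for one user, weighted by item popularity.

    user_history:     iterable of item ids the user has interacted with
    all_interactions: dict mapping user id -> list of item ids
    Returns a list of `num_negatives` distinct items not in user_history.
    """
    rng = random.Random(seed)
    # Item popularity = total interaction count across all users.
    popularity = Counter(
        item for items in all_interactions.values() for item in items
    )
    items, weights = zip(*popularity.items())
    seen = set(user_history)
    negatives = []
    # Rejection-sample until we have enough distinct unseen items.
    while len(negatives) < num_negatives:
        item = rng.choices(items, weights=weights, k=1)[0]
        if item not in seen and item not in negatives:
            negatives.append(item)
    return negatives
```

Whether negatives are drawn by popularity or uniformly at random changes the ranking metrics substantially, which may explain part of the discrepancy being discussed here.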
One thing about the data preprocessing in BERT4Rec: only ml-1m.txt is identical to the SASRec data. The Beauty.txt and Steam.txt files differ between the two papers; see the SASRec repo: https://github.com/kang205/SASRec/tree/master/data
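An easy way to verify which preprocessed files are byte-identical across the two repos is to compare content hashes. A small sketch (the file paths would be wherever you cloned each repo; this is not part of either project's code):

```python
import hashlib

def file_digest(path):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Example usage (paths are hypothetical):
# same = file_digest("SASRec/data/ml-1m.txt") == file_digest("BERT4Rec/data/ml-1m.txt")
```

Identical digests mean the preprocessing produced the same file; differing digests confirm the datasets diverge, as reported above for Beauty.txt and Steam.txt.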
No description provided.