
Questions about Benchmarks #7

Open
DanialTaheri opened this issue Jul 19, 2020 · 3 comments

Comments


DanialTaheri commented Jul 19, 2020

No description provided.


lshowway commented Aug 3, 2020

Hi,

Thanks for your organized repository and for sharing your code.
Your paper about BERT4Rec is interesting. I have a few questions, and I would appreciate it if you could help me understand more about the details of your paper.
I tried to compare the results of BERT4Rec with SASRec on the datasets you use in your paper. I could reproduce your results; however, I get different performance results for SASRec compared to what is reported in the paper. My initial guess was that the datasets used are different, but the paper mentions that the data preprocessing follows the SASRec paper.
I was wondering if I am missing something.
I would appreciate it if you could share more details about how you obtained the SASRec results.

Thanks,
Danial

A question: is the model architecture (figure (b) in the paper) the same as the implementation (the attention_layer and transformer_model functions in modeling.py)?
I think the bidirectionality in the Transformer is implemented by the self-attention inside Multi-Head Attention, so what do the bidirectional connections between the Trm blocks in figure (b) represent?
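For context, a minimal sketch of the distinction behind this question, assuming standard scaled dot-product attention: a bidirectional encoder (BERT4Rec-style) lets every position attend to every other position, while a left-to-right model such as SASRec applies a causal (lower-triangular) mask. The function and variable names below are illustrative and not taken from modeling.py.

```python
import numpy as np

def attention_weights(q, k, causal=False):
    """Scaled dot-product attention weights for one head.

    q, k: arrays of shape (seq_len, d_k).
    causal=False -> bidirectional: every position sees every position.
    causal=True  -> unidirectional: position i only sees positions <= i.
    """
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)               # (seq_len, seq_len)
    if causal:
        mask = np.tril(np.ones_like(scores))      # lower-triangular mask
        scores = np.where(mask == 1, scores, -1e9)
    # softmax over the key dimension
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return weights / weights.sum(axis=-1, keepdims=True)

# Toy example: with causal=False (bidirectional) the first position attends to
# the whole sequence; with causal=True its attention collapses onto itself.
rng = np.random.default_rng(0)
q = rng.normal(size=(4, 8))
k = rng.normal(size=(4, 8))
print(attention_weights(q, k, causal=False)[0])   # nonzero everywhere
print(attention_weights(q, k, causal=True)[0])    # only the first entry is nonzero
```

So the "bidirection" lives in the attention mask inside each Trm block, and the arrows between Trm blocks in the figure just depict that information flows from every input position to every output position of the previous layer.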


ltz0120 commented Sep 7, 2020

Hi,

I also met the same problem with the benchmarks. I ran BPR-MF (code from https://github.com/duxy-me/ConvNCF) on the same dataset as this paper, with the same popularity-based negative sampling method for the test set. The BPR-MF performance is much higher than the results reported in this paper.
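For reference, a minimal sketch of what popularity-based negative sampling for the test set usually means in this evaluation protocol (100 negatives per test user, drawn proportionally to item popularity); the helper name, arguments, and file handling below are assumptions for illustration, not the authors' evaluation code.

```python
import numpy as np

def sample_popular_negatives(user_history, ground_truth, item_counts, n_neg=100, seed=0):
    """Sample n_neg negative items for one test user, proportional to item popularity.

    user_history: set of item ids the user has interacted with (excluded from negatives).
    ground_truth: the held-out test item (also excluded).
    item_counts:  dict mapping item id -> interaction count in the training data.
    The ranker then scores these negatives together with the ground-truth item.
    """
    rng = np.random.default_rng(seed)
    candidates = [i for i in item_counts if i not in user_history and i != ground_truth]
    counts = np.array([item_counts[i] for i in candidates], dtype=float)
    probs = counts / counts.sum()                 # popularity-proportional probabilities
    chosen = rng.choice(candidates, size=n_neg, replace=False, p=probs)
    return chosen.tolist()

# Toy usage: item 3 is roughly three times as likely to be drawn as item 1.
counts = {1: 10, 2: 20, 3: 30, 4: 40, 5: 50}
print(sample_popular_negatives(user_history={5}, ground_truth=4, item_counts=counts, n_neg=2))
```

Differences in exactly how the popularity distribution is computed and which items are excluded can shift the benchmark numbers noticeably, which may explain part of the gap.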

May I know if you could provide your experimental code for the baselines?
Thank you.


Jwmc999 commented Sep 6, 2021

Hi,

A note about the data preprocessing in BERT4Rec: only ml-1m.txt is identical to the SASRec data. The Beauty.txt and Steam.txt files used in the two papers are different; see the SASRec data repo for reference: https://github.com/kang205/SASRec/tree/master/data
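For anyone comparing the files, here is a minimal sketch of the SASRec-style preprocessing both papers describe (treat every rating as implicit feedback, sort each user's interactions chronologically, and drop users/items with fewer than 5 interactions). The input column layout and function name are assumptions, not the authors' scripts.

```python
from collections import defaultdict

def preprocess(ratings_path, min_interactions=5):
    """SASRec-style preprocessing sketch.

    Assumes a CSV-like file with lines "user,item,rating,timestamp".
    Every rating is treated as implicit feedback (a positive interaction).
    Users and items with fewer than min_interactions interactions are dropped,
    and each user's remaining interactions are sorted by timestamp.
    """
    user_count, item_count = defaultdict(int), defaultdict(int)
    rows = []
    with open(ratings_path) as f:
        for line in f:
            user, item, _rating, ts = line.strip().split(",")
            rows.append((user, item, int(ts)))
            user_count[user] += 1
            item_count[item] += 1

    sequences = defaultdict(list)
    for user, item, ts in rows:
        if user_count[user] >= min_interactions and item_count[item] >= min_interactions:
            sequences[user].append((ts, item))

    # Chronological item sequence per user.
    return {u: [item for _, item in sorted(events)] for u, events in sequences.items()}
```

Details such as whether the filter is applied once (as above) or iterated until it stabilizes (a true 5-core), and whether it is applied to items at all, are exactly the kind of thing that can make Beauty.txt and Steam.txt differ between the two repos.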
