
Is the validation metric computed across all Examples Lists that exist in the entire dataset? #329

Open
zahraegh opened this issue Aug 11, 2022 · 0 comments

Hello all,
I was following this example and adapting the approach to an implicit-feedback recommender system. My original dataset contains only positive user–item interactions, so for each user I randomly subsampled the remaining items to serve as negative examples. Each ELWC instance therefore contains the user_id and user_features in the Context, and a list of item_id and label pairs in the Examples List (e.g., label = [1, 1, 0, 0, 0]).
My main question: when a metric such as tfr.keras.metrics.RecallMetric(topn=TOP_K, name='Recall@k') is calculated on the validation set, does it 1) score and rank only the items inside each user's Examples List, or 2) score and rank every item_id that exists in the entire train and test set?
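For concreteness, here is a minimal sketch (not TF-Ranking's actual implementation) of what option (1), per-list recall@k, would mean for a single Examples List like the one above. The scores are hypothetical model outputs; the labels match the example label = [1, 1, 0, 0, 0].

```python
def recall_at_k(labels, scores, k):
    """Per-list recall@k: the fraction of this list's relevant items
    that appear in the top-k positions when items are ranked by score."""
    # Rank the items in this list by descending score.
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    top_k = order[:k]
    relevant = sum(labels)
    if relevant == 0:
        return 0.0
    hits = sum(labels[i] for i in top_k)
    return hits / relevant

# One Examples List with two positives and three sampled negatives.
labels = [1, 1, 0, 0, 0]
scores = [0.9, 0.2, 0.8, 0.1, 0.3]  # hypothetical per-item model scores

# Top-2 by score are items 0 and 2; only item 0 is relevant,
# so recall@2 = 1 hit / 2 relevant items = 0.5.
print(recall_at_k(labels, scores, 2))  # 0.5
```

Under option (2), by contrast, the denominator and candidate set would span every item in the corpus, not just the handful of items inside each user's list, which would generally give much lower recall values for the same model.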
Thanks
