[ASK] Perfect MAP@k is less than 1 #2091
Comments
I think it is easier to understand if we first explain recall@k: if the number of relevant items is greater than k, then recall can never reach 1, not even with a recommender that knows the test set. Say you have a single user with 12 item interactions. With k=5 the maximum recall is 5/12, because you only get 5 recommendations. MAP is based on the precision-recall curve, and for a perfect recommender it reduces to the fraction of relevant items recovered, which is exactly the recall. Therefore, the maximum achievable MAP@k equals the maximum achievable recall@k, which in general is not 1.
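The argument above can be checked numerically. This is a minimal sketch, not the library's implementation, using the MAP convention that normalizes average precision by the total number of relevant items (the convention under which perfect MAP@k equals recall@k):

```python
def average_precision_at_k(recommended, relevant, k):
    """AP@k normalized by ALL relevant items (assumed convention)."""
    hits = [1 if item in relevant else 0 for item in recommended[:k]]
    # precision at each hit position, summed, then divided by |relevant|
    score = sum(sum(hits[: i + 1]) / (i + 1) * hits[i] for i in range(len(hits)))
    return score / len(relevant)

def recall_at_k(recommended, relevant, k):
    return sum(1 for item in recommended[:k] if item in relevant) / len(relevant)

# One user with 12 relevant items; a "perfect" recommender returns 5 of them.
relevant = set(range(12))
perfect_recs = list(range(5))  # top-5, all relevant

ap = average_precision_at_k(perfect_recs, relevant, k=5)
rec = recall_at_k(perfect_recs, relevant, k=5)
# ap == rec == 5/12 ≈ 0.4167
```

Every precision term is 1 for the perfect ranking, so AP@k collapses to (number of hits) / (number of relevant items), i.e. recall@5 = 5/12.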
Description
I have a recommender that, for some users in some folds, has fewer than $k$ items in the ground truth. Therefore $precision@k$ is less than 1, even for a recommender that recommends exactly the ground truth. For that reason, I calculate the results of a perfect recommender for multiple metrics.
By definition, the perfect $ndcg@k$ is 1. I thought this was the case for $MAP@k$ too, but it is not: the average $MAP@5$ across my folds is 0.99, and I even have a fold with a $MAP@5$ of 0.7! I've also noticed that perfect $MAP@k$ is exactly equal to $recall@k$, but I haven't found any resources that explain this coincidence.
Keep in mind that I'm talking about implicit feedback, and the ideal recommender just assigns 1 in the prediction field.
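For contrast, here is why perfect $ndcg@k$ is always 1: the IDCG normalizer is the DCG of an ideal ranking truncated at $\min(k, |\text{relevant}|)$, so the perfect ranking divides by its own score. A hypothetical binary-relevance sketch (this is not the library's code, just an illustration of the normalizer):

```python
import math

def ndcg_at_k(recommended, relevant, k):
    """Binary-relevance NDCG@k; IDCG truncates at min(k, |relevant|)."""
    dcg = sum(1 / math.log2(i + 2)
              for i, item in enumerate(recommended[:k]) if item in relevant)
    idcg = sum(1 / math.log2(i + 2) for i in range(min(k, len(relevant))))
    return dcg / idcg

# User with only 3 relevant items and k = 5: the perfect recommender ranks
# the 3 relevant items first, then non-relevant padding.
relevant = {"a", "b", "c"}
perfect_recs = ["a", "b", "c", "x", "y"]
perfect_ndcg = ndcg_at_k(perfect_recs, relevant, k=5)  # exactly 1.0
```

Because MAP (under the all-relevant-items normalization) has no such truncated normalizer, the perfect score stays capped at recall@k instead of 1.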
Other Comments
I'll try to provide an example that causes this "issue".