Support for vectorized/batch inference? #18
Comments
Have you checked out the …
I think …
Is this problem resolved? I am having the same sort of issue: I have 50k queries and it takes a long time (for me roughly 150k seconds, almost 42 hours) to compute.
@Smu-Tan @puzzlecollector were you able to find an alternative to this implementation to speed up the process?
Check out Pyserini.
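For reference, Pyserini's LuceneSearcher exposes a multi-threaded batch_search over a Lucene index, which avoids the per-query Python loop entirely. A minimal sketch, assuming a recent Pyserini release; the prebuilt index name, query strings, and thread count below are placeholders:

```python
# Hedged sketch of batched retrieval with Pyserini (not rank_bm25).
from pyserini.search.lucene import LuceneSearcher

# Load a prebuilt index (placeholder name); you can also point at your own index dir.
searcher = LuceneSearcher.from_prebuilt_index('msmarco-v1-passage')

queries = ['what is bm25', 'sparse retrieval vs dense retrieval']
qids = ['q1', 'q2']

# batch_search runs the whole query list in parallel across threads.
results = searcher.batch_search(queries, qids, k=10, threads=8)
for qid, hits in results.items():
    print(qid, [(hit.docid, round(hit.score, 2)) for hit in hits])
```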
Hi @Smu-Tan, @puzzlecollector, and @wise-east, I have just released a new Python-based search engine called …
Hi, I'm just wondering: is there any method that can speed up the retrieval process, for example vectorized or batch inference (i.e., running retrieval for a batch/list of queries at the same time)?
I'm trying to use BM25 to retrieve the top-n docs for large data (over 10k queries against 50k docs), and if I do this by calling bm25.get_top_n() in a for loop, the inference time is unacceptably long.
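One way to batch this while staying in Python is to precompute the per-document BM25 term weights into a SciPy sparse matrix and score every query against every document with a single sparse matrix multiplication, instead of calling get_top_n per query. The sketch below is not part of rank_bm25; the function names and the k1/b defaults are illustrative assumptions, and the idf uses a common smoothed variant rather than rank_bm25's exact formula.

```python
# Minimal sketch: vectorized BM25 top-n retrieval via sparse matrices.
import numpy as np
from scipy import sparse
from collections import Counter

def build_bm25_matrix(corpus_tokens, k1=1.5, b=0.75):
    """Precompute a (num_docs x vocab_size) sparse matrix of BM25 term weights."""
    vocab, rows, cols, tfs = {}, [], [], []
    doc_lens = np.array([len(doc) for doc in corpus_tokens], dtype=np.float64)
    for i, doc in enumerate(corpus_tokens):
        for term, tf in Counter(doc).items():
            j = vocab.setdefault(term, len(vocab))
            rows.append(i); cols.append(j); tfs.append(tf)
    tf_mat = sparse.csr_matrix((tfs, (rows, cols)),
                               shape=(len(corpus_tokens), len(vocab)))
    # Document frequency per term, then a smoothed Okapi-style idf.
    df = np.bincount(tf_mat.indices, minlength=len(vocab))
    n_docs = len(corpus_tokens)
    idf = np.log((n_docs - df + 0.5) / (df + 0.5) + 1.0)
    # BM25 weight per (doc, term): idf * tf*(k1+1) / (tf + k1*(1 - b + b*len/avgdl)).
    avgdl = doc_lens.mean()
    norm = k1 * (1.0 - b + b * doc_lens / avgdl)
    coo = tf_mat.astype(np.float64).tocoo()
    coo.data = idf[coo.col] * coo.data * (k1 + 1.0) / (coo.data + norm[coo.row])
    return coo.tocsr(), vocab

def batch_top_n(bm25_mat, vocab, queries_tokens, n=10):
    """Score all queries against all docs in one sparse matmul; return top-n doc ids."""
    rows, cols, vals = [], [], []
    for qi, q in enumerate(queries_tokens):
        for term, qtf in Counter(q).items():
            if term in vocab:
                rows.append(qi); cols.append(vocab[term]); vals.append(qtf)
    q_mat = sparse.csr_matrix((vals, (rows, cols)),
                              shape=(len(queries_tokens), len(vocab)))
    scores = (q_mat @ bm25_mat.T).toarray()          # (num_queries x num_docs)
    top = np.argpartition(-scores, n - 1, axis=1)[:, :n]
    # Sort the n candidates per query by descending score.
    order = np.argsort(-np.take_along_axis(scores, top, axis=1), axis=1)
    return np.take_along_axis(top, order, axis=1)
```

Note that for 10k queries against 50k docs the dense score matrix is about 4 GB in float64, so in practice you would likely process the queries in chunks of a few hundred at a time and collect the top-n ids per chunk.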