HNSW + dead tuples: recall loss/usability issues #244
Comments
I consider massive updates/deletes a rare use case for pgvector. For these workload types (or for all workloads, just to be safe), the recommendation could be to set ef_search with some overhead above the number of tuples in the LIMIT clause rather than the exact value. It's easy to estimate how many dead tuples to expect for a known update/delete rate between vacuums, so setting ef_search to a fixed (big) value is not necessary.
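A minimal SQL sketch of this approach (the table name, dimensions, and the 50% overhead figure are hypothetical, chosen only to illustrate the idea):

```sql
-- Hypothetical table with an HNSW index (pgvector).
CREATE TABLE items (id bigserial PRIMARY KEY, embedding vector(3));
CREATE INDEX ON items USING hnsw (embedding vector_l2_ops);

-- We want the top 10 results. If we expect roughly a third of the
-- candidates to be dead tuples between vacuums, give ef_search some
-- headroom above the LIMIT instead of a fixed large value.
SET hnsw.ef_search = 15;  -- 10 requested + ~50% overhead (assumed rate)

SELECT id
FROM items
ORDER BY embedding <-> '[1,2,3]'
LIMIT 10;
```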
Hi @alanwli, I'm not sure there's a way to address this that doesn't affect search speed, but it may be worth the tradeoff. I've pushed a solution to the kill-prior-tuple branch, but need to think about it more. The two things that will affect search speed are:
Hi @ankane,
As an end user, I think people would prefer accuracy and relevancy over speed, as long as it doesn't get too slow.
Here's a rough summary of the situation: as elements get updated/deleted, either speed or recall has to give until the graph can be repaired (which currently happens on vacuum, as it's expensive). To guarantee results in every case, it's possible the entire graph would need to be visited (for instance, after deleting 99% of rows), which could be extremely slow (and likely cause SELECTs to tie up the database CPU).

With the current approach, speed will be consistent across queries, but recall will give. The branch now gives up speed for recall by increasing ef_search. The impact either approach has on recall and speed will be different for each dataset.

Also, by default, tables are autovacuumed when 20% of rows (+ 50) change, which should keep the graph in a healthy state in many cases with either approach.
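Since graph repair happens on vacuum, one way to keep either approach healthy on a heavily updated table is to vacuum more aggressively than that default 20% + 50 threshold. A sketch (the table name and thresholds are illustrative, not from the thread):

```sql
-- Default trigger: autovacuum_vacuum_scale_factor (0.2) * reltuples
--                  + autovacuum_vacuum_threshold (50) dead tuples.
-- Trigger after ~1% of rows change instead, so the HNSW graph is
-- repaired more often and scans encounter fewer dead tuples.
ALTER TABLE items SET (
    autovacuum_vacuum_scale_factor = 0.01,
    autovacuum_vacuum_threshold = 1000
);
```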
So, on the master branch: speed stays consistent, but recall drops as tuples become dead. On the kill-prior-tuple branch: recall is preserved, but queries get slower. Do I understand it correctly?
Hi @ankane - thanks for looking into a solution for the problem. I agree with the discussion thus far that this is yet another speed vs. recall tradeoff, and one that many other ANN implementations likely don't have to worry about, since they don't care about transactional semantics. Do you have a sense of how much overhead your kill_prior_tuple solution would incur?
The built-in B-tree index has a similar mechanism (bottom-up index deletion, triggered by non-HOT updates). Might you be able to do something similar? The underlying assumption of the optimization is "we've just had a non-HOT update, there might be more non-HOT-updated tuples, and those other tuples may already be dead, so let's try to find some dead non-HOT-updated tuples to clean up". Ref:
Has there been a solution for this? I am getting the following notice as well. The index creation also takes hours and fails. How can we scale the parameters as our data grows 10-20x?
That notice is not related to this issue. Please see the docs on index build time for how to resolve it.
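For reference, a sketch of the docs' general advice for HNSW build time (presumably the notice above is the maintenance_work_mem one; the exact values here are illustrative only):

```sql
-- Let the HNSW graph being built fit in memory; if it doesn't,
-- pgvector emits a notice about the graph no longer fitting into
-- maintenance_work_mem and the build slows down dramatically.
SET maintenance_work_mem = '8GB';

-- Use parallel workers for the build (pgvector 0.6.0+).
SET max_parallel_maintenance_workers = 7;  -- plus the leader

CREATE INDEX ON items USING hnsw (embedding vector_l2_ops);
```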
This issue is a follow-up on this discussion in #239.
Scenario: A user sets ef_search to 10, expecting to get the top-10 results back. But if the top-10 results in the HNSW index happen to be dead tuples (due to updates & deletes), then the query will return 0 results. While this doesn't impact things like ann-benchmarks, it will impact more realistic use cases where applications update/delete data.

Desired behavior: the HNSW index returns the top-10 results whose tuples aren't dead, to avoid significant recall loss - e.g. in the scenario above, recall would be 0%. Note that ivfflat has the desired behavior.
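A minimal SQL sketch of the scenario (the table and data are hypothetical, and the exact outcome depends on the graph and on vacuum timing):

```sql
SET hnsw.ef_search = 10;

-- Suppose the 10 nearest neighbors of the query vector are deleted
-- (or updated) and autovacuum has not yet repaired the index.
DELETE FROM items WHERE id <= 10;

-- The scan pulls ef_search candidates from the graph; if all of them
-- are dead tuples, the query can return 0 rows even though live,
-- reasonably close rows still exist.
SELECT id
FROM items
ORDER BY embedding <-> '[1,2,3]'
LIMIT 10;
```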
I can see some potentially ugly workarounds:
1. The user sets ef_search to a value higher than the LIMIT - but then what's the principled way to determine what value of ef_search to specify? (A rough heuristic is sketched after this list.)
2. ef_search can be specified up to 1k, which should theoretically minimize the problem but also offers no bounded guarantees - e.g. a user can still hit the above problem when the top-1k results of a large dataset in HNSW are dead.

And I think the ugly workarounds will fall apart especially for larger datasets, so having a principled fix in the HNSW index would help real-world users.
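One rough way to pick that value, in the spirit of the dead-tuple estimate suggested earlier in the thread (a sketch; the table name is hypothetical and pg_stat counters are only estimates):

```sql
-- Estimate the fraction of dead tuples since the last vacuum.
SELECT n_live_tup,
       n_dead_tup,
       round(n_dead_tup::numeric / nullif(n_live_tup + n_dead_tup, 0), 2)
           AS dead_fraction
FROM pg_stat_user_tables
WHERE relname = 'items';

-- If ~30% of tuples are dead, request ~30% more candidates than the
-- LIMIT, e.g. LIMIT 10 -> ef_search 13 (capped at 1000).
SET hnsw.ef_search = 13;
```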
Thoughts?