ByteTrack memory consumption increases indefinitely #1164
Comments
Hi, @tc360950! 👋🏻 I completely agree that the problem exists. Why do you think storing values from the last pass is enough?
Hi @SkalskiP,

The `removed_tracks` collection is only read in `sub_tracks(self.lost_tracks, self.removed_tracks)`, where it's used to get rid of lost tracks which have been marked for removal (i.e. those which have not been matched to a detection for a predefined time). Since tracks marked for removal in the latest pass are only added to `removed_tracks` after that subtraction, lost tracks which were removed on pass no. N will actually be removed (i.e. filtered out of `self.lost_tracks`) on the next pass.

Now, as to why this is enough: once a track lands in `removed_tracks`, it is filtered out of `self.lost_tracks` on the next pass and is never added back, so removals from earlier passes can never affect `self.lost_tracks` again; keeping them around only costs memory and time.

I agree that it's a risky change, since there are no unit tests for ByteTrack. (I can of course come up with some scripts which compare results from both versions on hand-crafted or random data, which I did before raising this issue, but it still seems "not quite right".) There is an open PR with unit tests for ByteTrack. I have not looked at it, but maybe that's the best way to introduce this change: add good unit tests for ByteTrack.
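The kind of comparison script the comment mentions can be sketched as follows. Everything below is a simplified stand-in, not supervision's actual code: tracks are reduced to id-carrying dicts, and `sub_tracks` is reimplemented from its description in this thread.

```python
def sub_tracks(tracks_a, tracks_b):
    """Return the tracks in tracks_a whose ids do not appear in tracks_b."""
    removed_ids = {t["id"] for t in tracks_b}
    return [t for t in tracks_a if t["id"] not in removed_ids]


class Bookkeeping:
    """Mimics only the lost/removed bookkeeping discussed in this issue."""

    def __init__(self, keep_full_history):
        self.keep_full_history = keep_full_history
        self.lost_tracks = []
        self.removed_tracks = []

    def update(self, new_lost, new_removed):
        self.lost_tracks.extend(new_lost)
        # Tracks marked removed on an *earlier* pass are dropped here.
        self.lost_tracks = sub_tracks(self.lost_tracks, self.removed_tracks)
        if self.keep_full_history:
            # Current behaviour: the list grows on every pass.
            self.removed_tracks.extend(new_removed)
        else:
            # Proposed fix: keep only the last pass's removals.
            self.removed_tracks = list(new_removed)


# Feeding both variants the same stream of lost/removed tracks leaves
# lost_tracks identical; only the size of removed_tracks differs.
current, fixed = Bookkeeping(True), Bookkeeping(False)
for i in range(100):
    new_lost = [{"id": i}]
    new_removed = [{"id": i - 1}] if i > 0 else []
    current.update(new_lost, new_removed)
    fixed.update(new_lost, new_removed)

assert current.lost_tracks == fixed.lost_tracks
assert len(current.removed_tracks) == 99   # grows with the pass count
assert len(fixed.removed_tracks) == 1      # stays bounded
```

The equivalence rests on the invariant stated above: a removed track never re-enters `lost_tracks`, so only the most recent pass's removals can still change the subtraction's result.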
Hi @tc360950 👋🏻 Thank you for explaining your thought process. The ByteTrack code was transferred to Supervision (with minor changes) from another codebase, which is why it slightly deviates from the writing style we have in Supervision and, as you rightly observed, is not tested at all. Thanks a lot for the analysis you conducted.
I'm closing this PR as #1166 was just merged.
Search before asking
Bug
ByteTrack stores removed tracklets in a list which is extended with new removed tracklets after every pass (line 489 in `byte_tracker/core.py`).

As a result, the size of the `removed_tracks` collection grows unbounded, and the computational cost of `sub_tracks(self.lost_tracks, self.removed_tracks)` increases with every pass.

The fix is simple: we only need to keep the latest removed tracklets (i.e. the ones from the last pass) in the collection. I've submitted a corresponding PR.
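The code excerpt referenced at line 489 did not survive in this copy of the issue. A minimal, self-contained illustration of the reported growth pattern and of the proposed one-line change (only the `removed_tracks` name comes from the report; the loop is hypothetical):

```python
# Reported pattern: the list is extended on every pass and never pruned.
removed_tracks = []
for pass_no in range(1000):
    new_removed = [pass_no]        # stand-in for tracklets removed this pass
    removed_tracks.extend(new_removed)
assert len(removed_tracks) == 1000  # unbounded growth

# Proposed fix: keep only the tracklets removed on the latest pass.
removed_tracks = []
for pass_no in range(1000):
    new_removed = [pass_no]
    removed_tracks = new_removed
assert len(removed_tracks) == 1     # bounded
```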
Environment
Not applicable
Minimal Reproducible Example
This simple script shows that update efficiency decreases steadily:
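The script itself was not preserved in this copy of the issue. Below is a stand-in that, instead of timing supervision's real ByteTrack, counts the work a simplified `sub_tracks` does on each pass. The cost model (work proportional to the number of stored removed and lost tracks) is an assumption, as is everything apart from the names taken from the report.

```python
def sub_tracks_work(lost_tracks, removed_tracks):
    # Cost model: sub_tracks builds a set over removed_tracks and then
    # filters lost_tracks, so per-pass work scales with both lengths.
    return len(removed_tracks) + len(lost_tracks)

def simulate(passes, keep_full_history):
    lost_tracks, removed_tracks = [], []
    work = []
    for pass_no in range(passes):
        lost_tracks = [pass_no]            # a constant number of lost tracks
        work.append(sub_tracks_work(lost_tracks, removed_tracks))
        new_removed = [pass_no]            # one tracklet removed per pass
        if keep_full_history:
            removed_tracks.extend(new_removed)   # behaviour reported here
        else:
            removed_tracks = new_removed         # behaviour after the fix
    return work

current = simulate(1000, keep_full_history=True)
fixed = simulate(1000, keep_full_history=False)

# Per-pass cost climbs steadily with the current behaviour and stays flat
# after the fix, matching the "efficiency decreases steadily" observation.
assert current[-1] > current[0]
assert fixed[-1] == fixed[10]
```

With 1000 passes, the per-pass cost of the current behaviour ends at 1000 units (999 stored removals plus one lost track), while the fixed variant settles at a constant 2.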
Additional
No response
Are you willing to submit a PR?