Replaced every instance of np.float with built-in float #399
base: main
Conversation
…Tracker update method
Just a small clarification:
yolox/tracker/byte_tracker.py
Outdated
@@ -143,20 +150,29 @@ def __repr__(self):
 class BYTETracker(object):
-    def __init__(self, args, frame_rate=30):
+    def __init__(self, args, feature_extractor, global_stracks: List[STrack] = [], frame_rate=30):  # <----- added feature_extractor & global_stracks to the constructor
If I'm not wrong, feature_extractor should be a feature extraction model, right?
Feature Extractor in BYTETracker
Yes, the feature_extractor is passed as an argument when initializing BYTETracker. This extractor is responsible for generating features from detected objects, which are then used for tracking purposes.
The specific FeatureExtractor class I use comes from the deep-person-reid repository:
https://github.com/KaiyangZhou/deep-person-reid/blob/master/torchreid/utils/feature_extractor.py
Feature Extraction in ByteTrack's Update Method
The update method in the original ByteTrack repository (https://github.com/ifzhang/ByteTrack.git) does not employ feature extraction for tracking. Instead, it relies solely on a Kalman filter to predict the position of bounding boxes in the current frame based on the position and velocity data associated with objects in the previous frame.
However, this approach has a drawback: the Kalman filter cannot predict the box for a person who has been lost for more than a certain number of frames (30 by default). Consequently, if a person disappears for more than 30 frames, the tracker simply ignores them, and if they reappear, they are assigned a new ID. This is problematic as it fails to recognize the reappearance of the same individual.
To address this issue, I propose storing the features of every unique person detected so far. This way, if a person disappears for more than the buffer size (30 frames by default) and then reappears, we can compare their features with the previously stored ones to determine whether it's the same person or not. This approach allows for more accurate tracking and avoids assigning new IDs to individuals who have simply been out of sight for a short period.
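The matching step described above can be sketched as a cosine-similarity lookup over the stored features. This is only an illustrative sketch: the function name, the dict layout, and the 0.5 similarity threshold are assumptions, not taken from the PR itself.

```python
import numpy as np

def match_reappeared_track(query_feat, stored_feats, sim_threshold=0.5):
    """Compare a new detection's appearance feature against the stored
    features of previously removed tracks.

    stored_feats: dict mapping track_id -> 1-D feature vector.
    Returns the best-matching track id above the threshold, else None.
    (Names and threshold are illustrative, not from the actual PR.)
    """
    q = query_feat / (np.linalg.norm(query_feat) + 1e-12)
    best_id, best_sim = None, sim_threshold
    for track_id, feat in stored_feats.items():
        f = feat / (np.linalg.norm(feat) + 1e-12)
        sim = float(q @ f)  # cosine similarity of unit vectors
        if sim > best_sim:
            best_id, best_sim = track_id, sim
    return best_id
```

If a match is found, the reappearing detection can be reassigned the old track id instead of a fresh one; otherwise a new track is started as before.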
LGTM!
yolox/tracker/byte_tracker.py
Outdated
-    def update(self, output_results, img_info, img_size):
+    def update(self, frame, output_results, img_info, img_size):  # <----- added frame argument to the update method
It would be fine by me, but maybe it needs to be documented a bit? It might be confusing to others.
Yeah, you are right. I used the README.md file from the original ByteTracker repo as-is, and that could be confusing. I will modify it to specify what changes I have made and how this differs from the original repository.
This code snippet utilizes a modified update() method that takes a frame as an argument to extract features for each individual detected in the image. Here's a breakdown of the process:
Step 1: Person Detection
The code uses YOLOv8 (or any other object detector) to detect individuals in the frame.
YOLOv8 returns a result containing bounding boxes, class IDs, confidence scores, and other information for each detected object.
Step 2: Tracking with BYTETracker
The detected bounding boxes are passed to the BYTETracker class along with the frame.
BYTETracker is responsible for tracking individual objects across multiple frames.
Step 3: Region Cropping
The code crops the regions specified by the bounding boxes from the frame.
These cropped regions represent the individual persons detected in the image.
Step 4: Feature Extraction
The cropped regions are passed as a list to a feature_extractor function.
This function extracts features from each individual region, such as facial features or body posture.
Reason for Taking frame as Input:
The modified update() method takes the frame as input to enable feature extraction for each individual detected in the image. This allows for more accurate tracking and identification of individuals across multiple frames.
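The cropping and extraction steps above (Steps 3 and 4) can be sketched roughly as follows. This is a minimal sketch, assuming tlbr pixel boxes and a callable extractor; the helper names are illustrative and the real update() method may round or clip boxes differently.

```python
import numpy as np

def crop_regions(frame, boxes):
    """Step 3: crop per-person regions from the frame.

    frame: H x W x 3 image array; boxes: iterable of (x1, y1, x2, y2)
    in pixel coordinates. Boxes are clipped to the image bounds.
    (Illustrative sketch, not the PR's exact code.)
    """
    h, w = frame.shape[:2]
    crops = []
    for x1, y1, x2, y2 in boxes:
        x1, y1 = max(0, int(x1)), max(0, int(y1))
        x2, y2 = min(w, int(x2)), min(h, int(y2))
        crops.append(frame[y1:y2, x1:x2])
    return crops

def extract_features(crops, feature_extractor):
    """Step 4: pass the cropped regions as a list to the extractor,
    which returns one feature per region."""
    return feature_extractor(crops)
```

With the torchreid FeatureExtractor mentioned earlier, feature_extractor would be the model instance called on the list of crops.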
Love the changes! I was working on ByteTrack with DETR and YOLOv8 before; even with a ReID model, the results still seemed poor when I wanted more accurate unique counts. Hope this improves the results!
… removed the elements from u_detection for which there was a matching track in removed_stracks
Hey there! I've just replaced every instance of np.float with the built-in float. np.float has been deprecated.
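For context, np.float was only an alias for the built-in float (deprecated in NumPy 1.20 and later removed), so the replacement is behavior-preserving:

```python
import numpy as np

# np.float was a deprecated alias for the builtin float; use float
# directly (or np.float64 for an explicit 64-bit dtype).
xs = np.asarray([1, 2, 3], dtype=float)  # was: dtype=np.float
assert xs.dtype == np.float64            # builtin float maps to float64
```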