Dataset preparation

Introduction

In this document we describe how to prepare the standard datasets used to benchmark UniTrack on different tasks. We consider five tasks: Single Object Tracking (SOT) on the OTB 2015 dataset, Video Object Segmentation (VOS) on the DAVIS 2017 dataset, Multiple Object Tracking (MOT) on the MOT-16 dataset, Multiple Object Tracking and Segmentation (MOTS) on the MOTS dataset, and Pose Tracking on the PoseTrack 2018 dataset. Among them, SOT and VOS are propagation-type tasks, in which only one observation (usually in the very first frame) is given to indicate the object to be tracked; the others are association-type tasks, which make use of observations at every timestamp, provided by an automatic detector.

OTB 2015 dataset for SOT

The original source of the OTB benchmark does not provide a convenient way to download the entire dataset. Luckily, Gluon CV provides a script for downloading all OTB video sequences. The script handles both downloading and data processing; simply run:

python otb2015.py

and you will get all 100 sequences of OTB. After this, copy Jogging to Jogging-1 and Jogging-2, and copy Skating2 to Skating2-1 and Skating2-2 (or use softlinks), following STVIR; a scripted sketch of this step is given after the tree below. Finally, please download OTB2015.json [Google Drive] / [Baidu NetDisk] (code: k93s) and place it under the OTB-2015 root. The structure should look like this:

${OTB_ROOT}
   ├── OTB2015.json
   ├── Basketball/
   ├── Biker/
   └── ...
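
The duplication step can be scripted. Below is a minimal sketch in Python, assuming it is run from inside ${OTB_ROOT}; swap shutil.copytree for os.symlink if you prefer softlinks.

    # Duplicate the two-object sequences so that each annotation gets its
    # own sequence directory. A sketch; assumes the CWD is ${OTB_ROOT}.
    import shutil

    for src, dsts in [("Jogging",  ["Jogging-1", "Jogging-2"]),
                      ("Skating2", ["Skating2-1", "Skating2-2"])]:
        for dst in dsts:
            shutil.copytree(src, dst)  # or: os.symlink(src, dst)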

DAVIS 2017 dataset for VOS

Download DAVIS 2017 trainval via this link and unzip it. No further processing is needed.
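
For reference, the unzipped root typically looks like this (assuming the official 480p trainval archive):

    ${DAVIS_ROOT}
       ├── Annotations/480p/
       ├── ImageSets/2017/
       └── JPEGImages/480p/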

MOT-16 dataset for MOT

  1. Download the MOT-16 dataset from this page.

  2. Get detections for MOT-16 sequences. Here we offer three options:

    • Using ground-truth detections. This is feasible only for the train split (labels are not available for the test split). Run python tools/gen_mot16_gt.py to prepare the detections.

    • Using the three kinds of official detections (DPM/FRCNN/SDP) provided by the MOT Challenge. The detections come from the MOT-17 dataset (the video sequences in MOT-17 are the same as in MOT-16), so you may need to download MOT-17 and unzip it under the same root as MOT-16 first. Then run python tools/gen_mot16_label17.py to prepare the detections. This option can generate detections for both the train and test splits.

    • [Recommended] Using custom detectors to generate detection results. You need to first run the detector on the MOT-16 dataset and output a series of MOT16-XX.txt files to store the detection results, where XX ranges from 01 to 14. Each line in the .txt file represents one bounding box in the format frame_index (starts from 1), x, y, w, h, confidence; a minimal writer is sketched after this list. We provide an example generated by the FairMOT detector [Google Drive] / [Baidu NetDisk] (code: k93s). Finally, run python tools/gen_mot16_fairmot.py to prepare the detections.

      Note that you can also download .txt results of other trackers from the MOT-16 leaderboard, or of other detectors from the MOT-17 DET leaderboard, and use their detection results with very few modifications to tools/gen_mot16_fairmot.py.
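
To make the expected .txt layout concrete, here is a minimal writer sketch; the detections list is dummy data standing in for your detector's output, and comma separation follows the usual MOT Challenge convention.

    # Write detections for one sequence as MOT16-XX.txt, one box per line:
    # frame_index (starts from 1), x, y, w, h, confidence. Dummy values.
    detections = [
        (1, 561.0, 132.5, 55.2, 167.8, 0.92),
        (1, 102.3, 150.0, 48.7, 160.1, 0.85),
        (2, 563.4, 133.0, 55.0, 168.2, 0.90),
    ]

    with open("MOT16-02.txt", "w") as f:
        for frame, x, y, w, h, conf in detections:
            f.write(f"{frame},{x:.2f},{y:.2f},{w:.2f},{h:.2f},{conf:.4f}\n")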

MOTS dataset for MOTS

  1. Download the MOTS dataset from this page.
  2. Get segmentation masks for MOTS sequences. Here we offer two options:
    • Using ground-truth annotations. This is feasible only for the train split (labels are not available for the test split). Run python tools/gen_mots_gt.py to prepare the masks.
    • [Recommended] Using custom models to generate segmentation masks. You need to first run the model on the MOTS dataset and output a series of MOTS-XX.txt files to store the mask results; see here for the output format, and the sketch after this list for a concrete example. Note that we do not use the track "id" field, so you can output any number as a placeholder. You can also download results of off-the-shelf trackers and use their masks; for example, simply download the raw result data of the COSTA tracker at the bottom of this page. Finally, run python tools/gen_mot16_fairmot.py to prepare the masks. (The script will keep the track "id" field, but again, this field is ignored when running tracking with UniTrack.)
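
To make the mask layout concrete, here is a minimal writer sketch using pycocotools; the mask is dummy data, the track id is a placeholder as noted above, and class id 2 (pedestrian) is assumed per the MOTS convention.

    # Append one mask line in the MOTS .txt layout:
    # frame_id track_id class_id img_height img_width rle
    import numpy as np
    import pycocotools.mask as rletools

    mask = np.zeros((480, 640), dtype=np.uint8)  # dummy binary mask
    mask[100:300, 200:260] = 1
    rle = rletools.encode(np.asfortranarray(mask))["counts"].decode("ascii")

    with open("MOTS-02.txt", "a") as f:
        f.write(f"1 0 2 480 640 {rle}\n")  # track id 0 is a placeholder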

PoseTrack 2018 dataset for Pose Tracking

  1. Register and download the PoseTrack 2018 dataset.
  2. Get single-frame pose estimation results. Run a single-frame pose estimator and save the results in a $OBS_NAME.json file, formatted as instructed here (a minimal skeleton is sketched below). The "track_id" field is ignored, so you can output any number as a placeholder.
  3. Put the .json file under the $POSETRACK_ROOT/obs/$SPLIT/ folder, where $SPLIT is either "train" or "val".
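
To illustrate step 2, below is a hypothetical minimal skeleton of the $OBS_NAME.json file; the field names follow the common poseval layout but are assumptions here, so treat the format description linked above as authoritative.

    # Hypothetical minimal $OBS_NAME.json with dummy values; consult the
    # linked PoseTrack format spec for the authoritative field list.
    import json

    results = {
        "annotations": [{
            "image_id": 10003420000,               # frame identifier
            "track_id": 0,                         # placeholder, ignored
            "keypoints": [250.0, 95.0, 0.9] * 17,  # 17 joints x (x, y, score)
            "scores": [0.9] * 17,                  # per-joint confidences
        }]
    }

    with open("myposes.json", "w") as f:           # $OBS_NAME = myposes
        json.dump(results, f)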