
VRU_Net

Codebase containing the training and testing scripts for the proposed VRU-Net for efficient 2D human pose estimation. More details about the project can be found in this report.

Setup

COCO Dataset

  1. Download the COCO training data from here and unzip it in $Root/Data/COCO/.
  2. Download the COCO validation data from here and unzip it in $Root/Data/COCO/.
  3. Download the annotations file from here and unzip it in $Root/Data/COCO/annotations/.
  4. Install the packages listed in $Root/required_packages.txt.
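
Assuming the standard COCO 2017 archives from images.cocodataset.org (the original "here" links are not preserved in this copy), the setup steps above can be sketched as:

```shell
# Sketch of the setup steps, assuming the standard COCO 2017 archives.
# Adjust ROOT to your checkout of the repository.
ROOT="$PWD"
mkdir -p "$ROOT/Data/COCO/annotations"

# 1-2. Training and validation images (large downloads).
# wget http://images.cocodataset.org/zips/train2017.zip -P "$ROOT/Data/COCO/"
# wget http://images.cocodataset.org/zips/val2017.zip   -P "$ROOT/Data/COCO/"
# unzip "$ROOT/Data/COCO/train2017.zip" -d "$ROOT/Data/COCO/"
# unzip "$ROOT/Data/COCO/val2017.zip"   -d "$ROOT/Data/COCO/"

# 3. Keypoint annotations.
# wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip -P "$ROOT/Data/COCO/"
# unzip "$ROOT/Data/COCO/annotations_trainval2017.zip" -d "$ROOT/Data/COCO/"

# 4. Python dependencies.
# pip install -r "$ROOT/required_packages.txt"
```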

Directory Structure

After completing the setup above, the directory should look like:

$Root
├── Data
│   ├── COCO
│   │   ├── annotations
│   │   │   ├── person_keypoints_train2017.json
│   │   │   └── person_keypoints_val2017.json
│   │   ├── train2017
│   │   │   ├── img1.jpg
│   │   │   ├── img2.jpg
│   │   │   └── ...
│   │   └── val2017
│   │       ├── img1.jpg
│   │       ├── img2.jpg
│   │       └── ...
│   └── VRU
│       ├── Annotations
│       │   ├── bbox.json
│       │   └── vru_keypoints_val.json
│       └── images
│           └── val
│               ├── img1.jpg
│               ├── img2.jpg
│               └── ...
├── pics/
├── runs/
├── trained_models/
├── Data_Loader.ipynb
├── Data_Loader.py
├── Inference.ipynb
├── Modify_annotations.ipynb
├── Modify_annotations.py
├── model.ipynb
├── model.py
├── required_packages.txt
├── train.ipynb
└── .gitignore

Training

  1. Make sure the setup is complete and the files are placed as shown in Directory Structure.
  2. Modify the hyperparameters in the second block of train.ipynb.
  3. Run the notebook train.ipynb.
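
The exact hyperparameter names live in the notebook's second cell; a hypothetical example of what that block might contain (all names and values here are illustrative, not the repository's actual settings):

```python
# Hypothetical hyperparameter block, as one might find in the second
# cell of train.ipynb. The real names and values are in the notebook.
hyperparams = {
    "batch_size": 16,
    "learning_rate": 1e-3,
    "num_epochs": 10,
    "input_size": (256, 192),      # (height, width) of the network input
    "checkpoint_dir": "trained_models/",
}
```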

Testing

  1. Download the pretrained model from Google Drive and place it in trained_models/.
  2. Run Inference.ipynb.

Results

Note: The model was trained for only half an epoch, so the results are preliminary.

Ground truth vs. predicted heatmap (six example pairs; images not reproduced here).

To-Do

  • Add inference code.
  • Add results.
  • Add training and validation curve.
  • Add code for extracting the final keypoint location from the predicted heatmap.
  • Add network architecture diagram.
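
Until the keypoint-extraction code from the list above lands, the usual approach is an argmax per heatmap channel. A minimal sketch (the function name and NumPy implementation are mine, not the repository's):

```python
import numpy as np

def keypoints_from_heatmaps(heatmaps):
    """Recover (x, y, score) per joint from a (num_joints, H, W) heatmap stack.

    Minimal argmax decoder; real pipelines usually add sub-pixel
    refinement and rescale coordinates back to the input image.
    """
    num_joints, h, w = heatmaps.shape
    flat = heatmaps.reshape(num_joints, -1)
    idx = flat.argmax(axis=1)                   # flat index of each peak
    scores = flat[np.arange(num_joints), idx]   # peak value = confidence
    xs, ys = idx % w, idx // w                  # column = x, row = y
    return np.stack([xs, ys, scores], axis=1)

# Toy check: one joint with a peak at (x=5, y=3).
hm = np.zeros((1, 8, 8))
hm[0, 3, 5] = 1.0
print(keypoints_from_heatmaps(hm)[0])           # → [5. 3. 1.]
```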

Conclusion

It ain't much, but it's honest work.
