
Monocular RGB, real time 3D hand pose estimation in the wild

This repository contains scripts for testing the work: "Using a single RGB frame for real time 3D hand pose estimation in the wild"

You can download the full paper from here.

3D hand pose estimation on a wild YouTube video

Overview

Our method enables the real-time estimation of the full 3D pose of one or more human hands using a single commodity RGB camera. Recent work in the area has shown impressive progress using RGBD input. However, since the introduction of RGBD sensors, there has been little progress for the case of monocular color input.

We capitalize on the latest advancements of deep learning, combining them with the power of generative hand pose estimation techniques to achieve real-time monocular 3D hand pose estimation in unrestricted scenarios. More specifically, given an RGB image and the relevant camera calibration information, we employ a state-of-the-art detector to localize hands.

Subsequently, we run a pretrained network that estimates the 2D locations of the hand joints (e.g. the network by Gouidis et al or by Simon et al). In the final step, non-linear least-squares minimization fits a 3D model of the hand to the estimated 2D joint positions, recovering the 3D hand pose.

Pipeline overview
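The fitting step itself lives in the closed-source CVRL-FORTH libraries, but the idea can be sketched in a few lines. The example below is a minimal illustration, not the project's actual API: it assumes a hypothetical project_joints(params, calib) function that renders the 3D hand model's joints into the image plane, and uses scipy.optimize.least_squares to minimize the 2D reprojection error.

import numpy as np
from scipy.optimize import least_squares

def reprojection_residuals(params, joints_2d, project_joints, calib):
    # Difference between the model's projected joints and the detected 2D joints.
    projected = project_joints(params, calib)   # hypothetical renderer, (N, 2)
    return (projected - joints_2d).ravel()      # flat residual vector

def fit_hand_pose(joints_2d, init_params, project_joints, calib):
    # Non-linear least squares over the hand model parameters
    # (global pose plus finger articulation).
    result = least_squares(
        reprojection_residuals,
        x0=np.asarray(init_params, dtype=np.float64),
        args=(joints_2d, project_joints, calib),
        method="lm",   # Levenberg-Marquardt, a common choice for reprojection fits
    )
    return result.x

Warm-starting init_params from the previous frame's solution is what keeps a minimization like this fast enough for real-time use.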

Requirements

This work depends on a set of (currently) closed-source C++ libraries developed at CVRL-FORTH. We provide Ubuntu 16.04 binaries for these libraries. Follow the instructions here to download them and set up your environment properly.

You will need Python 3.x to run the scripts. The following Python libraries are required:

sudo pip3 install numpy opencv-python

If you use the provided pretrained network for 2D joint estimation (by Gouidis et al) you will also need to install TensorFlow.

pip3 install tensorflow-gpu

NOTE: The script was tested with TensorFlow 1.12.0 and CUDA 9.0.

If you use the 2D joint estimator of Simon et al you will need to install OpenPose and PyOpenPose. Follow the installation instructions of these projects.

Hand detector

In our paper we use a retrained YOLO detector to detect hands (left, right) and heads in the input image. The codebase in this project does not include that part of the pipeline. Instead, the example scripts use an initial bounding box and tracking to crop the user's hand in each frame and pass it to the 2D joint estimator, as sketched below.
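The crop-and-track logic can be illustrated as follows. This is an example sketch rather than the repository's exact code; estimate_joints_2d is a placeholder for whichever 2D joint estimator is used, and the bounding box is simply re-centered on the median of the estimated joints each frame.

import numpy as np

def estimate_joints_2d(crop):
    # Placeholder for the 2D joint estimator (e.g. the network of Gouidis et al).
    raise NotImplementedError

def crop_box(frame, box):
    # Crop an (x, y, w, h) box from the frame, clamped to the image bounds.
    x, y, w, h = box
    H, W = frame.shape[:2]
    x0, y0 = max(0, x), max(0, y)
    x1, y1 = min(W, x + w), min(H, y + h)
    return frame[y0:y1, x0:x1], (x0, y0)

def update_box(joints_2d, box):
    # Re-center the box on the median joint position, keeping its size.
    _, _, w, h = box
    cx, cy = np.median(joints_2d, axis=0)
    return (int(cx - w / 2), int(cy - h / 2), w, h)

def track(frames, init_box):
    # Follow the hand across frames: crop, estimate joints, move the box.
    box = init_box
    for frame in frames:
        crop, (ox, oy) = crop_box(frame, box)
        joints = estimate_joints_2d(crop) + np.array([ox, oy])  # full-image coords
        box = update_box(joints, box)
        yield joints, box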

Usage

You can use the 3D hand pose estimation step with any 2D joint estimator. We provide two example scripts:

handpose.py

The handpose.py script uses the 2D hand joint estimator of Gouidis et al.

handpose_simon_backend.py

This script uses the 2D hand joint estimator by Simon et al. You will need to properly install OpenPose and PyOpenPose before running this script.
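Since the 3D fitting step only consumes 2D joint positions, plugging in a different estimator amounts to swapping one function. The glue loop below is a hedged sketch of that idea, reusing the hypothetical estimate_joints_2d and fit_hand_pose placeholders from the sketches above; it is not this repository's actual interface.

import cv2
import numpy as np

def track_video(path, project_joints, calib, init_params,
                estimate_joints_2d, fit_hand_pose):
    # Per-frame loop: 2D joints from any estimator feed the 3D fit,
    # warm-started from the previous frame's solution.
    cap = cv2.VideoCapture(path)
    params = np.asarray(init_params, dtype=np.float64)
    poses = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        joints_2d = estimate_joints_2d(frame)       # (N, 2) pixel coordinates
        params = fit_hand_pose(joints_2d, params, project_joints, calib)
        poses.append(params.copy())
    cap.release()
    return poses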

Docker

Since this is a (very) old project, the only practical way to test it on modern Linux distros is with Docker.

You need to download the cuDNN 7 .deb packages from NVIDIA (registration required) and place them in the cudnn folder. See here for details.

Finally, go to the openpose_models folder and run the getModels.sh script to download the required OpenPose models.

You can use the devcontainer with VS Code or build the image from the CLI with docker-compose. This creates an Ubuntu 16.04 image with all the libraries required to test the project. Build and run it from the CLI with the following commands:

docker-compose build
docker-compose up -d
xhost +
docker exec -it devcontainer_dev_1 python3 handpose_simon_backend.py
