Synthesizing Training Images for Boosting Human 3D Pose Estimation


3D Vision 2016


Wenzheng Chen1    Huan Wang1    Yangyan Li2    Hao Su3    Zhenhua Wang4
Changhe Tu1    Dani Lischinski4    Daniel Cohen-Or2    Baoquan Chen1


1Shandong University   2Tel Aviv University   3Stanford University   4Hebrew University of Jerusalem



We design a scalable human image synthesis pipeline that can generate millions of human images with corresponding 2D and 3D pose annotations. In this pipeline, human 3D poses are first captured by a MoCap system or inferred from human-annotated 2D poses. Next, we employ SCAPE to generate human body models. Finally, we use Blender to render the models into images and composite them onto background images.
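To make the last step concrete, here is a minimal sketch of the compositing stage in Python with Pillow, assuming the Blender render is saved as an RGBA image with a transparent background; the file names and paste position are hypothetical placeholders, not part of the released code.

    from PIL import Image

    def composite(render_path, background_path, out_path, position=(0, 0)):
        # Rendered SCAPE model, saved by Blender with an alpha channel.
        person = Image.open(render_path).convert("RGBA")
        background = Image.open(background_path).convert("RGBA")
        # Use the render's alpha channel as the paste mask.
        background.paste(person, position, mask=person)
        background.convert("RGB").save(out_path)

    composite("scape_render.png", "background.jpg", "training_image.jpg")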

 

Abstract


Human 3D pose estimation from a single image is a challenging task with numerous applications. Convolutional Neural Networks (CNNs) have recently achieved superior performance on the task of 2D pose estimation from a single image, by training on images with 2D annotations collected by crowdsourcing. This suggests that similar success could be achieved for direct estimation of 3D poses. However, 3D poses are much harder to annotate, and the lack of suitable annotated training images hinders attempts towards end-to-end solutions. To address this issue, we opt to automatically synthesize training images with ground truth pose annotations. Our work is a systematic study in this direction. We find that pose space coverage and texture diversity are the key ingredients for the effectiveness of synthetic training data. We present a fully automatic, scalable approach that samples the human pose space for guiding the synthesis procedure and extracts clothing textures from real images. Furthermore, we explore domain adaptation for bridging the gap between our synthetic training images and real testing photos. We demonstrate that CNNs trained with our synthetic images outperform those trained with real photos on 3D pose estimation tasks.



Downloads

 

Paper | Supplementary Materials | Model | image_mean.binaryproto | Training Data (32.4 GB)
We release our model and part of the training data. The model is a Caffe model fine-tuned from the bvlc_alexnet model. The training data contains about 1.5 million images with their corresponding 2D and 3D pose annotations. You can also use the code below to generate your own data.
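As a starting point, below is a hedged sketch of running the released model through Caffe's Python interface; the deploy.prototxt and .caffemodel file names are assumptions, so substitute the actual names from the download package.

    import caffe
    from caffe.proto import caffe_pb2

    caffe.set_mode_cpu()

    # Load the released mean image (image_mean.binaryproto).
    blob = caffe_pb2.BlobProto()
    with open("image_mean.binaryproto", "rb") as f:
        blob.ParseFromString(f.read())
    mean = caffe.io.blobproto_to_array(blob)[0]      # shape (3, H, W)
    channel_mean = mean.mean(1).mean(1)              # per-channel BGR mean

    # Hypothetical file names -- use the ones shipped with the model.
    net = caffe.Net("deploy.prototxt", "deep3dpose.caffemodel", caffe.TEST)

    transformer = caffe.io.Transformer({"data": net.blobs["data"].data.shape})
    transformer.set_transpose("data", (2, 0, 1))     # HWC -> CHW
    transformer.set_mean("data", channel_mean)
    transformer.set_raw_scale("data", 255)           # caffe.io loads in [0, 1]
    transformer.set_channel_swap("data", (2, 1, 0))  # RGB -> BGR

    image = caffe.io.load_image("person.jpg")
    net.blobs["data"].data[...] = transformer.preprocess("data", image)
    output = net.forward()  # output blob names depend on the deploy prototxt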

 

Code

 

Human3D+


We create Human3D+, a new and richer dataset of images with 3D annotations, captured in both indoor and outdoor scenes (e.g., rooms, playgrounds, and parks) and covering general actions such as walking, running, and playing football. We capture 25 human action sequences in total; each lasts 2-3 minutes and includes a video together with a BVH motion file. If you are interested in the dataset, please contact chen1474147@gmail.com.
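For a quick look at a downloaded sequence, the sketch below summarizes a BVH motion file (joint names, frame count, frame time); the file name is a placeholder.

    def read_bvh_summary(path):
        # Scan the HIERARCHY section for joints and the MOTION header
        # for the frame count and sampling interval.
        joints, frames, frame_time = [], 0, 0.0
        with open(path) as f:
            for line in f:
                tokens = line.split()
                if not tokens:
                    continue
                if tokens[0] in ("ROOT", "JOINT"):
                    joints.append(tokens[1])
                elif tokens[0] == "Frames:":
                    frames = int(tokens[1])
                elif line.strip().startswith("Frame Time:"):
                    frame_time = float(tokens[2])
        return joints, frames, frame_time

    joints, frames, frame_time = read_bvh_summary("sequence01.bvh")
    print("%d joints, %d frames, frame time %.4f s" % (len(joints), frames, frame_time))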


 

Acknowledgement


We are grateful to Jie Mao for providing us with the SCAPE code.

 

We thank the anonymous reviewers for their valuable comments.


 

BibTex


@inproceedings{Deep3DPose,
    title     = {Synthesizing Training Images for Boosting Human 3D Pose Estimation},
    author    = {Wenzheng Chen and Huan Wang and Yangyan Li and Hao Su and Zhenhua Wang and Changhe Tu and Dani Lischinski and Daniel Cohen-Or and Baoquan Chen},
    booktitle = {3D Vision (3DV)},
    year      = {2016},
}