EUROGRAPHICS 2015
¹Shandong University   ²Tel Aviv University   ³Hebrew University of Jerusalem
Given a segmented image as input, we produce a stereo pair (or a motion parallax animation) by hallucinating plausible 3D geometry for the scene. First, segments are depth-sorted using simple depth and occlusion cues. Next, the geometry and texture of each object are completed using symmetry and convexity priors. Finally, we infer a depth placement for each object. We urge the reader to view the companion video before reading the paper, and to examine our anaglyphs at full size using red/cyan glasses. All of the results in this paper are available in the supplementary material.
We introduce a novel method for enabling stereoscopic viewing of a scene from a single pre-segmented image. Rather than attempting full 3D reconstruction or accurate depth map recovery, we hallucinate a rough approximation of the scene’s 3D model using a number of simple depth and occlusion cues and shape priors. We begin by depth-sorting the segments, each of which is assumed to represent a separate object in the scene, resulting in a collection of depth layers. The shapes and textures of the partially occluded segments are then completed using symmetry and convexity priors. Next, each completed segment is converted to a union of generalized cylinders, yielding a rough 3D model for each object. Finally, the object depths are refined using an iterative ground fitting process. The hallucinated 3D model of the scene may then be used to generate a stereoscopic image pair, or to produce images from novel viewpoints within a small neighborhood of the original view. Despite the simplicity of our approach, we show that it compares favorably with state-of-the-art depth ordering methods. A user study was conducted showing that our method produces more convincing stereoscopic images than existing semi-interactive and automatic single-image depth recovery methods.
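To make the final rendering step concrete, the sketch below shows one simple way to synthesize a stereo pair once a per-pixel depth map has been obtained: pixels are shifted horizontally in proportion to inverse depth (basic depth-image-based rendering). This is an illustrative stand-in under our own assumptions, not the paper's renderer; the function names, the pixel-shift scheme, and the omission of hole filling are all simplifications.

    import numpy as np

    def render_stereo_pair(image, depth, max_disparity=16):
        # image: (h, w, 3) uint8 array; depth: (h, w) array, larger = farther.
        # Hypothetical sketch: a simple pixel-shift renderer, not the
        # authors' implementation.
        h, w = depth.shape
        d = depth.astype(float)
        # Nearer pixels (smaller depth) receive larger disparity.
        disparity = max_disparity * (d.max() - d) / (d.max() - d.min() + 1e-6)
        half = (disparity / 2).astype(int)
        left = np.zeros_like(image)
        right = np.zeros_like(image)
        # Paint each scanline back to front so nearer pixels overwrite
        # farther ones; disocclusion holes are left black for brevity.
        order = np.argsort(-d, axis=1)
        for y in range(h):
            src = order[y]
            xl = np.clip(src + half[y, src], 0, w - 1)
            xr = np.clip(src - half[y, src], 0, w - 1)
            left[y, xl] = image[y, src]
            right[y, xr] = image[y, src]
        return left, right

    def anaglyph(left, right):
        # Red/cyan anaglyph: red channel from the left view,
        # green and blue channels from the right view.
        return np.dstack([left[..., 0], right[..., 1], right[..., 2]])

For example, left, right = render_stereo_pair(img, depth) followed by anaglyph(left, right) produces an image viewable with red/cyan glasses, matching how the results in this paper are presented.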
We are grateful to Guillem Palou for providing us with executable code, to Kevin Karsch, Bryan C. Russell and Antonio Torralba (LabelMe3D) for providing their source code online, and to Zhaoyin Jia for suggestions on implementing their method. Images in this paper are collected from Flickr, Google Images and the Make3D dataset.
We thank the anonymous reviewers for their valuable comments. This work was supported by NSFC (National Natural Science Foundation of China, Nos. 2015CB352500, 61332015 and 61272242), the Israel Science Foundation and the Intel Collaborative Research Institute for Computational Intelligence (ICRI-CI).
@Article{StereoHallucination,
Title = {Hallucinating Stereoscopy from a Single Image},
Author = {Qiong Zeng and Wenzheng Chen and Huan Wang and Changhe Tu and Daniel Cohen-Or and Dani Lischinski and Baoquan Chen},
Journal = {Computer Graphics Forum (Proc. of Eurographics)},
Year = {2015},
Number = {2},
Pages = {to appear},
Volume = {34}
}