Abstract
We introduce a novel method for enabling stereoscopic viewing of a scene from a single pre-segmented image. Rather than attempting full 3D reconstruction or accurate depth map recovery, we hallucinate a rough approximation of the scene’s 3D model using a number of simple depth and occlusion cues and shape priors. We begin by depth-sorting the segments, each of which is assumed to represent a separate object in the scene, resulting in a collection of depth layers. The shapes and textures of the partially occluded segments are then completed using symmetry and convexity priors. Next, each completed segment is converted to a union of generalized cylinders, yielding a rough 3D model for each object. Finally, the object depths are refined using an iterative ground fitting process. The hallucinated 3D model of the scene may then be used to generate a stereoscopic image pair, or to produce images from novel viewpoints within a small neighborhood of the original view. Despite the simplicity of our approach, we show that it compares favorably with state-of-the-art depth ordering methods. A user study shows that our method produces more convincing stereoscopic images than existing semi-interactive and automatic single-image depth recovery methods.