Abstract
We present a data-driven method for automatically cropping photographs to be well-composed and aesthetically pleasing. Our method matches the composition of an amateur’s photograph to an expert’s using point correspondences. The correspondences are based on a novel high-level local descriptor we term the ‘Object Context’. Object Context is an extension of Shape Context: it is a descriptor encoding which objects and scene elements surround a given point. By searching a database of expertly composed images, we can find a crop window which makes an amateur’s photograph closely match the composition of a database exemplar. We cull irrelevant matches in the database efficiently using a global descriptor which encodes the objects in the scene. For images with similar content in the database, we efficiently search the space of possible crops using generalized Hough voting. When comparing the result of our algorithm to expert crops, our crop windows overlap the expert crops by 83.6%. We also perform a user study which shows that our crops compare favourably to human experts’ crops.