Filtered Stochastic Shadow Mapping Using a Layered Approach

Abstract

Given a stochastic shadow map rendered with motion blur, our goal is to render an image from the eye with motion-blurred shadows and as little noise as possible. We use a layered approach in the shadow map, reproject samples along the average motion vector, and then perform lookups in this representation. Our results include substantially improved shadow quality compared to previous work and a fast graphics processing unit (GPU) implementation. In addition, we devise a set of scenes designed to expose problematic cases for motion-blurred shadows. These scenes have difficult occlusion characteristics and may be used in future research on this topic.
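To illustrate the reprojection idea mentioned above, the following C++ sketch reprojects the stochastic samples of a single shadow-map texel to a common reference time along their average motion, and then performs a filtered visibility lookup. This is a minimal sketch under simplifying assumptions (one depth layer, depth-only motion); the type and function names (ShadowSample, reprojectToTime, visibility) are illustrative and are not taken from the paper.

```cpp
#include <cstddef>
#include <vector>

// One stochastic shadow-map sample for a texel (assumed representation).
struct ShadowSample {
    float depth;   // light-space depth of the sample
    float motion;  // depth change over the shutter interval (simplified to 1D)
    float time;    // stochastic sample time in [0, 1]
};

// Average motion of the samples in this texel/layer.
float averageMotion(const std::vector<ShadowSample>& samples) {
    if (samples.empty()) return 0.0f;
    float sum = 0.0f;
    for (const ShadowSample& s : samples) sum += s.motion;
    return sum / static_cast<float>(samples.size());
}

// Reproject every sample to a common reference time (e.g. t = 0.5) along the
// average motion, so the samples become comparable and can be filtered
// without per-sample time tests.
std::vector<float> reprojectToTime(const std::vector<ShadowSample>& samples,
                                   float refTime) {
    const float avg = averageMotion(samples);
    std::vector<float> depths;
    depths.reserve(samples.size());
    for (const ShadowSample& s : samples)
        depths.push_back(s.depth + avg * (refTime - s.time));
    return depths;
}

// Filtered visibility at the reference time: the fraction of reprojected
// occluder depths that lie behind the receiver.
float visibility(const std::vector<float>& occluderDepths, float receiverDepth) {
    if (occluderDepths.empty()) return 1.0f;
    std::size_t blocked = 0;
    for (float d : occluderDepths)
        if (d < receiverDepth) ++blocked;
    return 1.0f -
           static_cast<float>(blocked) / static_cast<float>(occluderDepths.size());
}
```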

Graphical abstract

An octopus in motion casting a complex motion-blurred shadow, rendered by our algorithm. With the same input samples, our algorithm produces significantly less noise than time-dependent shadow maps (TSM). At equal rendering time, the noise level is still substantially reduced. The animated octopus mesh is taken from the Alembic source distribution.