[Colloquium] REMINDER: Talks at TTIC: Deqing Sun, Harvard

Dawn Ellis dellis at ttic.edu
Fri Mar 6 09:45:58 CST 2015


When:     Monday, March 9, 2015 at 11am

Where:    TTIC, 6045 S Kenwood Avenue, 5th Floor, Room 526

Who:       Deqing Sun, Harvard

Title:     From Pixels to Local Layers: Exploring Flexible Representations for Motion Estimation

Abstract:
We live in a dynamic world where motion is ubiquitous. For robots and
other intelligent agents to understand the world, we need to give them the
ability to perceive motion. Estimating image motion and segmenting scenes
into coherently moving regions are two closely related problems, yet they
are often treated separately. Motion provides an important cue for
identifying surfaces in a scene, while segmentation can provide the proper
support for motion estimation. Despite decades of research, current
methods still tend to produce large errors, especially near motion
boundaries and in occluded regions.

In this talk, I will begin by introducing a probabilistic layered model
for joint motion estimation and segmentation. This model orders each moving
object (layer) in depth and explicitly models the occlusions between
layers. It explains segmentation using thresholded spatio-temporally
coherent support functions, and describes motion using globally coherent
but locally flexible priors. In this way, scene structure (segmentation),
rather than motion, is encouraged to persist over time. Our method achieves
promising results on both the Middlebury optical flow benchmark and the MIT
layer segmentation dataset, particularly in occluded regions.
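The depth-ordered compositing at the heart of such layered models can be illustrated with a small sketch (the function name and the simple thresholding rule here are illustrative, not the talk's exact probabilistic formulation):

```python
import numpy as np

def layer_visibility(support, threshold=0.5):
    """Given per-layer soft support functions (shape K x H x W),
    threshold them into hard masks and composite front-to-back:
    a pixel belongs to the nearest layer whose support exceeds the
    threshold, so nearer layers occlude farther ones."""
    masks = support > threshold                  # K x H x W boolean
    visible = np.zeros_like(masks)
    claimed = np.zeros(masks.shape[1:], dtype=bool)
    for k in range(masks.shape[0]):              # k = 0 is the nearest layer
        visible[k] = masks[k] & ~claimed
        claimed |= masks[k]
    return visible

# toy example: two layers on a 1 x 4 image, layer 0 in front
g = np.array([[[0.9, 0.9, 0.1, 0.1]],
              [[0.8, 0.8, 0.8, 0.8]]])
vis = layer_visibility(g)
# layer 1 is occluded by layer 0 at the first two pixels
```

The front-to-back loop is what makes occlusions explicit: the farther layer's support is still defined everywhere, but its visibility is carved out by the layers in front of it.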

Because "global" layered models scale poorly with the number of layers and
cannot capture mutual or self-occlusions, I will then introduce a local
layering representation. It breaks the scene into local layers and jointly
models the motion and the occlusion relationships between them. By
retaining uncertainty over both the motion and the occlusion relationships,
we avoid the local minima common to motion-only or occlusion-only
approaches. Our method can thus handle motion and occlusion well on both
challenging synthetic and real sequences.
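A toy way to see why retaining joint uncertainty helps (purely illustrative; the talk's model is a proper probabilistic formulation over local layers): score joint (motion, occlusion-order) hypotheses together and keep a posterior over them, rather than committing to a motion estimate first.

```python
import numpy as np

def joint_posterior(energy, temperature=1.0):
    """Turn a table of joint (motion, occlusion-order) energies into a
    Boltzmann posterior, retaining uncertainty instead of taking an argmin."""
    logp = -np.asarray(energy, dtype=float) / temperature
    logp -= logp.max()                       # stabilize the exponential
    p = np.exp(logp)
    return p / p.sum()

# toy energy table: rows = candidate motions, cols = occlusion orderings
energy = np.array([[1.0, 5.0],
                   [2.5, 0.3],              # jointly best: motion 1, ordering 1
                   [4.0, 3.0]])
p = joint_posterior(energy)

# A motion-only pass that fixed ordering 0 would prefer motion 0 (energy 1.0),
# but marginalizing the joint posterior over orderings favors motion 1.
motion_marginal = p.sum(axis=1)
```

Here the occlusion-only and motion-only views each point to the wrong answer; only the joint table reveals the low-energy combination, which is the intuition behind avoiding those local minima.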

Advances in motion estimation have enabled new computational video
applications. As an example, I will talk about our recent work on
interactive intrinsic video decomposition. We introduced a fast,
temporally consistent algorithm to decompose video sequences into
reflectance and illumination components. One key observation is that
reflectance is an intrinsic property of physical surfaces and tends to
persist over time, while lighting may vary. Temporally consistent
decompositions open the door to more sophisticated video editing, such as
retexturing and lighting-aware compositing.
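The underlying image-formation assumption can be sketched in a few lines (a hypothetical toy for a registered, static grayscale clip, not the authors' interactive algorithm): each frame factors into reflectance times illumination, and reflectance persists across frames while shading varies.

```python
import numpy as np

def toy_intrinsic_decomposition(frames, eps=1e-6):
    """Toy sketch: frames is T x H x W. Model I_t = R * S_t.
    Working in the log domain, estimate a single temporally persistent
    reflectance as the per-pixel median of log I_t over time, then
    recover per-frame shading as the residual."""
    log_i = np.log(frames + eps)
    log_r = np.median(log_i, axis=0)         # reflectance persists over time
    log_s = log_i - log_r                    # illumination varies per frame
    return np.exp(log_r), np.exp(log_s)

# sanity check: reflectance times shading reconstructs each frame
rng = np.random.default_rng(0)
r = rng.uniform(0.2, 1.0, (4, 4))           # ground-truth reflectance
s = np.stack([0.5 * np.ones((4, 4)),        # dim, bright, and neutral lighting
              1.5 * np.ones((4, 4)),
              np.ones((4, 4))])
frames = r * s
refl, shade = toy_intrinsic_decomposition(frames)
assert np.allclose(refl * shade, frames, atol=1e-4)
```

The temporal median is the crude stand-in here for the persistence prior: because lighting varies while reflectance does not, the per-pixel median over time lands on the reflectance component.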

Host:  Greg Shakhnarovich,  greg at ttic.edu




-- 
*Dawn Ellis*
Administrative Coordinator,
Bookkeeper
773-834-1757
dellis at ttic.edu

TTIC
6045 S. Kenwood Ave.
Chicago, IL. 60637