Department of Computer Science


March 29, 2018

Doctoral defense of Johannes Sebastian Wulff

on Friday, April 13, 2018, at 4 pm in Room A 104, Sand 1, ground floor

Model-based Optical Flow: Layers, Learning, and Geometry

Reviewer 1: Prof. Michael Black
Reviewer 2: Prof. Hendrik Lensch

The estimation of motion in video sequences establishes temporal correspondences between pixels and surfaces and allows reasoning about a scene using multiple frames. Despite being a focus of research for over three decades, computing motion, or optical flow, remains challenging due to a number of difficulties, including the treatment of motion discontinuities and occluded regions, and the integration of information from more than two frames. One approach to addressing these challenges uses layered models, which represent the occlusion structure of a scene and provide an approximation to its geometry. Building on classical layered methods, this talk will demonstrate ways to inject additional knowledge about the scene into layered models.
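For concreteness, classical layered models (in the spirit of Wang and Adelson's work; the notation below is an illustrative sketch, not necessarily the formulation used in the thesis) composite depth-ordered layers with soft support masks:

\[
I(\mathbf{x}) \;=\; \sum_{k=1}^{K} v_k(\mathbf{x})\, A_k(\mathbf{x}),
\qquad
v_k(\mathbf{x}) \;=\; g_k(\mathbf{x}) \prod_{j<k} \bigl(1 - g_j(\mathbf{x})\bigr),
\]

where layer 1 is nearest the camera, \(A_k\) is the appearance of layer \(k\), \(g_k\) its soft support mask, and \(v_k\) the resulting visibility. Each layer's flow transports \(A_k\) and \(g_k\) over time, so occlusions arise exactly where a nearer layer's mask covers a farther one.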
To this end, this talk will first present a generative, layered model for motion-blurred video sequences, and show how this model can be used to simultaneously deblur a video and compute accurate optical flow for highly dynamic scenes containing motion blur. Then, we will consider the representation of motion within layers. Since, in a layered model, the important motion discontinuities are captured by the segmentation into layers, the flow within each layer varies smoothly and can be approximated using a low-dimensional subspace. The combination of the layered model and the low-dimensional subspace gives the best of both worlds: sharp motion discontinuities from the layers and computational efficiency from the subspace.
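A common formation model for motion blur (shown here for a single layer, as a hedged sketch rather than the exact model presented in the talk) treats the observed frame \(B\) as an average of a latent sharp image \(I\) along the motion accumulated during the exposure:

\[
B(\mathbf{x}) \;\approx\; \int_0^1 I\bigl(\mathbf{x} + \tau\,\mathbf{u}(\mathbf{x})\bigr)\,d\tau,
\]

so the blur itself carries evidence about the flow \(\mathbf{u}\), and inverting the model jointly recovers a sharp video and accurate flow; in a layered version, each layer is blurred by its own motion before compositing.

The per-layer subspace idea can likewise be made concrete with a small PCA sketch. The code below is an illustration under assumed array shapes and an assumed number of basis vectors, not the thesis implementation: it fits a basis from example flow fields and reconstructs a flow field from a handful of coefficients.

```python
import numpy as np

def fit_flow_basis(flows, n_components=16):
    """Fit a PCA basis from example flow fields, each of shape (H, W, 2)."""
    X = np.stack([f.ravel() for f in flows])   # data matrix, (N, H*W*2)
    mean = X.mean(axis=0)
    # Right singular vectors of the centered data are the principal flows.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def project_flow(flow, mean, basis):
    """Snap a flow field onto the low-dimensional subspace."""
    coeffs = basis @ (flow.ravel() - mean)     # a few coefficients per layer
    return (mean + basis.T @ coeffs).reshape(flow.shape)
```

Representing each layer's flow by a few coefficients instead of two unknowns per pixel is where the computational efficiency comes from, while the layer segmentation still provides the sharp discontinuities.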
Lastly, we show how layered methods can be dramatically improved using simple semantics. Instead of treating all layers equally, a semantic segmentation divides the scene into its static parts and moving objects. Static parts of the scene constitute a large majority of what is shown in typical video sequences; yet, in such regions, the optical flow is fully constrained by the depth structure of the scene and the camera motion. After segmenting out the moving objects, we can explicitly reason about the structure of the scene and the camera motion, yielding much better optical flow estimates. Furthermore, computing the structure of the scene allows information to be integrated across more than two frames.
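The claim that flow in static regions is fully determined by scene depth and camera motion corresponds to the standard rigid-scene relation: back-project each pixel using its depth, move it by the camera motion, and re-project. The NumPy sketch below is an illustration under assumed conventions (pixel coordinates, intrinsics K, frame-to-frame pose R, t), not code from the thesis:

```python
import numpy as np

def rigid_flow(depth, K, R, t):
    """Flow induced by camera motion (R, t) over a static scene.

    depth: (H, W) depth map of frame 1; K: (3, 3) camera intrinsics.
    Convention: a 3D point X1 in frame 1 maps to X2 = R @ X1 + t.
    """
    H, W = depth.shape
    xs, ys = np.meshgrid(np.arange(W, dtype=float), np.arange(H, dtype=float))
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1)   # homogeneous pixels
    rays = pix @ np.linalg.inv(K).T                       # back-projected rays
    pts = rays * depth[..., None]                         # 3D points, frame 1
    moved = pts @ R.T + t                                 # points in frame 2
    proj = moved @ K.T                                    # re-project
    uv = proj[..., :2] / proj[..., 2:3]
    return uv - np.stack([xs, ys], axis=-1)               # flow (u, v)
```

In a model of this kind, flow in the segmented static regions collapses to estimating a depth map and six pose parameters, which is why explicitly reasoning about structure and camera motion yields such strong constraints.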
The Dean
