Although the digital photography industry is expanding rapidly, most digital cameras still look and feel like film cameras, and they offer roughly the same set of features and controls. However, as sensors and in-camera processing systems improve, these cameras will begin to offer capabilities that film cameras never had. Among these will be the ability to refocus photographs after they are taken (see the example above), or to combine views taken with different camera settings, aim, or placement. Equally exciting are new technologies for creating efficient, controllable illumination. Future "flashbulbs" may be pulsed LEDs or video projectors, with the ability to selectively illuminate objects, recolor the scene, or extract shape information. These developments force us to relax our notion of what constitutes "a photograph." They also blur the distinction between photography and scene modeling. These changes will lead to new photographic techniques, new scientific tools, and possibly new art forms.
In this course we will survey the converging technologies of digital photography, computational imaging, and image-based rendering, and we will explore the new imaging modalities that they enable.
To register for the exercises, sign up in ILIAS and submit your results for the first exercise sheet. Details on what to submit, and how, are given on the first exercise sheet.