Tri-camera Stereo Vision

Faranak Shamsafar

This project is funded by the Program for the Promotion of Junior Researchers at the University of Tübingen.

Stereo vision is a method to infer the third dimension of a scene from two 2D images. In particular, by computing the disparity of a matching point between the left and right images, the distance of that point from the camera viewpoint can be estimated. Stereo vision is used in a wide range of applications, including autonomous driving, robot navigation and bin picking. The technique, however, suffers from occlusion, where only one camera can observe a particular part of the scene. Moreover, with a fixed baseline the range of reliable depth estimation is limited, and the depth error grows quadratically with the distance of the object. In this work, stereo vision is investigated by adding the viewpoint of a third camera. With three cameras, both shorter and longer baselines are available, which helps to handle occlusion and to improve the depth accuracy for distant objects.
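To make the disparity-to-depth relationship and the quadratic error growth concrete, the minimal sketch below computes depth from disparity as Z = f·B/d and propagates a fixed disparity error into a depth error. It is an illustrative example only, not the project's code; the focal length, baseline values and one-pixel disparity error are assumptions chosen for demonstration.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth of a point from its disparity: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def depth_error(depth_m, focal_px, baseline_m, disparity_error_px=1.0):
    """First-order depth error for a given disparity error:
    |dZ| ~= Z^2 / (f * B) * |dd|, i.e. the error grows quadratically with depth
    and shrinks as the baseline B gets longer."""
    return depth_m ** 2 / (focal_px * baseline_m) * disparity_error_px

if __name__ == "__main__":
    focal_px = 1000.0         # assumed focal length in pixels
    short_baseline_m = 0.10   # assumed short baseline (e.g. adjacent camera pair)
    long_baseline_m = 0.30    # assumed long baseline (e.g. outer camera pair)

    for z in (2.0, 10.0, 30.0):
        err_short = depth_error(z, focal_px, short_baseline_m)
        err_long = depth_error(z, focal_px, long_baseline_m)
        print(f"depth {z:5.1f} m: +/-{err_short:6.3f} m (short baseline), "
              f"+/-{err_long:6.3f} m (long baseline)")
```

Under these assumptions, the printed errors grow roughly 25-fold when the depth increases from 2 m to 10 m, while the longer baseline reduces the error by the ratio of the baselines, which is the motivation for combining short and long baselines in a tri-camera setup.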