Wednesday, April 22, 15:00 - 17:30, Location: Show and Tell Area B
Dinei Florencio, Cha Zhang
Adding 3D perception is desirable in a variety of applications. Many 3D cues are already routinely incorporated into 2D images and graphics, such as shading, occlusion, and relative size. In general, the two most important missing cues are stereopsis (i.e., the images presented to the right and left eyes differ) and motion parallax (the displayed image changes with the user's position).
Stereopsis can be provided through various stereoscopic displays, which have become very popular in recent years. However, research studies have shown that motion parallax is at least as important as stereopsis in contributing to 3D perception. Nevertheless, the use of motion parallax has been mostly limited to lab studies, primarily because it is generally expensive and does not produce as striking an effect as stereopsis. For instance, most existing systems require the user to wear some type of tracking device, e.g., a positioning sensor, LED, or goggles. This makes head tracking easier, but it also makes the system more expensive and less convenient.
In this demo, we present a new system that uses head- and eye-tracking technology to add the parallax effect by rendering the image for the viewer's current eye position. Note that tracking is performed with a simple web camera, and the user is not required to wear any device.
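The demo does not spell out the tracker internals, so the following is only a minimal sketch of how eye positions can be located with a plain web camera, here using OpenCV Haar cascades (an assumed, illustrative choice, not necessarily the method used in the system):

# Minimal sketch: locate the viewer's eyes from a web camera with OpenCV
# Haar cascades. Illustrative only; the demo's actual tracker may differ.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture(0)                 # the camera pointing at the viewer
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (fx, fy, fw, fh) in face_cascade.detectMultiScale(gray, 1.3, 5):
        roi = gray[fy:fy + fh, fx:fx + fw]
        eyes = eye_cascade.detectMultiScale(roi)
        # Eye centers in image coordinates; a real system would smooth these
        # over time and convert them to a 3D eye position for rendering.
        centers = [(fx + ex + ew / 2.0, fy + ey + eh / 2.0)
                   for (ex, ey, ew, eh) in eyes]
        print(centers)
    if cv2.waitKey(1) == 27:              # press Esc to quit
        break
cap.release()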
While the technology can also be used with stereoscopic displays, we will demo it on a standard LCD display. In this case we illustrate the added 3D sensation by providing motion parallax without stereo vision, an effect we call "Monocular 3D". Although the effect is not as striking as that of a true stereoscopic display, the added hardware cost is essentially zero (we use just a legacy LCD display and a web cam). This makes the technology very appealing, as it can be applied to essentially all existing displays.
The system includes a standard display and a camera pointing at the user. The camera is used to find the user's (i.e., viewer's) eye position. Both the display and the camera can be standard off-the-shelf components. If a stereo display is available, it can also be used. Similarly, if a depth camera or multiple cameras are available, they may improve the robustness of the eye-position estimation.
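A common way to turn the estimated eye position into motion parallax (the specific rendering used in the demo is not detailed here) is an off-axis, asymmetric-frustum projection: the physical screen is treated as a fixed window into the scene, and the frustum is recomputed every frame from the tracked eye. The sketch below assumes an illustrative screen size and a coordinate frame with its origin at the screen center:

# Sketch of an off-axis projection for motion parallax: the screen is a fixed
# window in space and the frustum apex follows the tracked eye. Screen size,
# units, and coordinate convention are illustrative assumptions.
import numpy as np

SCREEN_W, SCREEN_H = 0.52, 0.32   # physical screen size in meters (assumed)

def off_axis_projection(eye, near=0.05, far=100.0):
    """eye = (x, y, z): eye position in meters, origin at the screen center,
    x right, y up, z toward the viewer (z > 0 in front of the screen)."""
    ex, ey, ez = eye
    # Frustum bounds at the near plane, scaled from the screen edges.
    left   = (-SCREEN_W / 2 - ex) * near / ez
    right  = ( SCREEN_W / 2 - ex) * near / ez
    bottom = (-SCREEN_H / 2 - ey) * near / ez
    top    = ( SCREEN_H / 2 - ey) * near / ez
    # Standard OpenGL-style asymmetric frustum matrix; the accompanying view
    # matrix would also translate the scene by the negative eye position.
    return np.array([
        [2 * near / (right - left), 0, (right + left) / (right - left), 0],
        [0, 2 * near / (top - bottom), (top + bottom) / (top - bottom), 0],
        [0, 0, -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0, 0, -1, 0]])

# As the tracked eye moves right, the frustum skews and the rendered scene
# shifts the opposite way, which produces the parallax cue on a normal display.
print(off_axis_projection((0.10, 0.0, 0.60)))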