
Stereoscopic Displays

Hong Hua Optical Sciences Center, University of Arizona, Tucson, AZ 85721 Email: [email protected]

Rapid developments in information technology and the increasing complexity and dimensionality of datasets have led to an ever-increasing demand for innovative display techniques. Stereoscopic displays allow observers to perceive depth and empower them to visualize information in a three-dimensional (3D) space. The proliferation of 3D displays has been driven mainly by the tremendous potential of virtual and augmented reality (VR/AR) across a wide spectrum of application areas such as scientific visualization, engineering design, training and education, and entertainment. Improvements in 3D graphics capabilities on personal computers (PCs) have further expedited the popularity of 3D techniques. This article provides an overview of state-of-the-art stereoscopic displays and briefly describes their technical approaches.

Keywords: Stereoscopy, stereoscopic displays, 3D displays, spatial vision

Principle of 3D viewing Human eyes rely on many visual cues to perceive and interpret depth in the real world. Such depth cues can be monocular or binocular. Monocular depth cues can be observed with one eye alone; common examples include perspective, occlusion, texture gradients, the distribution of light and shadows, and motion parallax. Binocular depth perception is based on the displacements (i.e. binocular disparity) between the projections of a scene object onto the left and right retinas due to eye separation. The binocular disparity is processed by the brain, giving an impression of relief in an effect known as stereopsis. Stereoscopic displays create depth sensation by exploiting binocular disparity.
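The geometry behind stereopsis can be sketched numerically. The snippet below assumes a simplified symmetric-vergence model and a nominal 65 mm inter-pupillary distance; both are illustrative assumptions, not values from the text.

```python
import math

def vergence_angle_deg(depth_m, ipd_m=0.065):
    # Convergence angle (in degrees) subtended at the two eyes by a
    # point `depth_m` metres straight ahead, given an inter-pupillary
    # distance `ipd_m`.  Nearer points subtend larger angles.
    return math.degrees(2 * math.atan((ipd_m / 2) / depth_m))

# The brain interprets the *difference* between such angles for points
# at different depths (the binocular disparity) as relative depth.
near = vergence_angle_deg(0.5)   # point at 0.5 m
far = vergence_angle_deg(2.0)    # point at 2.0 m
disparity_deg = near - far       # positive: the nearer point stands out
```

A stereoscopic display reproduces this situation by rendering each scene point with the horizontal screen offset that yields the desired disparity at the viewer's position.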

Taxonomy of 3D displays The majority of existing 3D displays recreate stereoscopic depth sensation by presenting the eyes with separate 2D images of the same scene generated from two slightly different viewpoints. The key to 3D displays is a mechanism that presents the left and right images to the corresponding eyes without crosstalk. A taxonomy of display techniques is shown in Figure 1. The displays are classified into eye-aided displays and auto-stereoscopic displays. Eye-aided displays require a user to wear special goggles to enable proper separation of the stereo images; they can be further categorized into head-attached displays and spatial displays. Head-attached displays mostly provide separate display elements for each eye and are thus referred to as non-shuttering displays. Spatial displays usually present a stereo pair on the same screen surface, so a multiplexing technique is required to make each image of the stereo pair visible to only one eye; these are known as shuttered displays. Auto-stereoscopic displays have image-separation techniques integrated into the displays and do not require any goggles to assist 3D viewing. Such approaches can be divided into parallax displays, volumetric displays, and holographic displays. Parallax displays present stereo pairs simultaneously and have an integrated image-separation mechanism to deliver multiple views directly to the correct eyes. Volumetric displays directly illuminate spatial points within a display volume by filling or sweeping out a volumetric image space. Holographic displays reconstruct the light emitted by a 3D object from interference fringes generated through a holographic recording process.
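For reference, the classification just described (Figure 1) can be restated as a small data structure; the nesting simply mirrors the text and is not an exhaustive catalogue of devices.

```python
# Taxonomy of stereoscopic displays, following the classification above.
TAXONOMY = {
    "eye-aided displays": {                 # require goggles
        "head-attached (non-shuttering)": [
            "head-mounted displays (HMDs)",
            "BOOM-like systems",
        ],
        "spatial (shuttered)": [
            "desktop displays",
            "projection displays",
        ],
    },
    "auto-stereoscopic displays": {         # no goggles needed
        "parallax displays": ["parallax barrier", "lenticular sheet"],
        "volumetric displays": ["solid-state", "multi-planar", "varifocal mirror"],
        "holographic displays": ["optical hologram", "electronic holography"],
    },
}
```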

3D display techniques Head-attached displays. Although head-attached displays may take various forms, their concept remains the same. A typical system consists, per eye, of an image source, an optical system, a housing unit by which the image source and the optics are attached to the user, and a tracking system to slave a dynamic viewpoint to the user's head and/or eye motions. As these devices are typically worn on the head and physically coupled with the user's head movement, they are referred to as head-mounted displays (HMDs). HMD designs may be further classified as immersive or see-through: the former blocks the direct real-world view, while the latter allows superposition of synthetic images onto the real world. While immersive HMDs are one solution for VR applications, see-through HMDs are currently the dominant display devices for AR applications. Two variations, video see-through and optical see-through HMDs, enable the superposition of computer-generated imagery onto real-world views.

The basic forms of HMD optical design are eyepiece-type and objective-eyepiece-combination-type magnifiers. An excellent review of HMD design can be found in [Melzer & Moffitt, 1997]. One of the grand challenges in HMD design is the trade-off between resolution and field of view (FOV). Several approaches, for instance physical or optical tiling of multiple displays and fovea-contingent high-resolution inset schemes, have been researched to pursue high-quality designs. However, these schemes lead to complexity, bulkiness, and increased cost. A recent advancement, referred to as the head-mounted projection display (HMPD), replaces the eyepiece optics with projection optics and retro-reflective screens, leading to miniaturization of the HMD optics as well as a wider FOV than traditional HMD designs. Overall, advances in HMD optical design capitalize on increasingly available emerging technologies such as plastic lenses, aspheric surfaces, and diffractive optical elements. The inherent portability of HMDs finds application in wearable computing and outdoor visualization, which in turn demands brighter displays with improved portability. The emergence of novel microdisplays, such as LCOS (Liquid Crystal on Silicon), OLED (Organic Light-Emitting Diode), and TMOS (Time-Multiplexed Optical Shutter) devices, offers potentially brighter imaging at higher resolution. An alternative is the retinal scanning display (also known as the virtual retinal display), which replaces the microdisplay image source with a modulated low-power laser beam and associated scanning systems that write individual pixels directly onto the retina of the human eye. Scanning displays potentially offer brighter and more compact designs. Alternatively, some displays are floor- or ceiling-mounted, and an observer uses the device by holding a handle rather than wearing a heavy display directly.
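The resolution/FOV trade-off can be quantified with a quick back-of-the-envelope calculation. The 1280-pixel microdisplay width and the two FOV values below are illustrative assumptions, not specifications of any device in the text.

```python
def pixels_per_degree(h_pixels, h_fov_deg):
    # Average angular resolution: the microdisplay's horizontal pixel
    # count spread across the horizontal field of view.
    return h_pixels / h_fov_deg

# 20/20 foveal acuity resolves roughly one arcminute of visual angle,
# i.e. about 60 pixels per degree.
FOVEAL_PPD = 60

# The same hypothetical 1280-pixel-wide microdisplay, magnified to two
# different fields of view: widening the FOV dilutes angular resolution.
narrow = pixels_per_degree(1280, 40)    # 32.0 ppd
wide = pixels_per_degree(1280, 100)     # 12.8 ppd
```

Both values fall well short of foveal acuity, which is why tiling and high-resolution inset schemes have been explored despite their added complexity.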
For instance, BOOM-like systems (Binocular Omni-Orientation Monitor) are personal immersive displays that offer stereoscopic capability on a counterbalanced, motion-tracking support structure for practically weightless viewing. Their optical designs are essentially the same as those of HMDs, so the above discussion of HMDs is readily applicable. Spatial displays. For spatial displays, a stereo image pair is generally presented on the same screen surface. The major display elements are installed within the environment and thus not physically coupled with the user's head movement. Stereo images are displayed either sequentially at a doubled frame rate (i.e. field-sequential shutters) or concurrently at a regular frame rate (i.e. light-filtering shutters). When a time-sequential shutter is used, to avoid image flickering and ghosting effects, the image source must have a refresh rate of 120 Hz or higher, and each image has to decay completely before the next field is displayed. The major shuttering techniques include color filters, polarization filters, and LCD shutter glasses. In color-multiplexed (anaglyph) displays, the left- and right-eye images are displayed as monochromatic pairs, typically in near-complementary colors (e.g. red-green or blue-red), and observers wear corresponding color-filtering glasses for image separation. In polarization-multiplexed displays, stereo images are polarized before they are projected onto a screen surface, and observers wear a corresponding polarization filter in front of each eye. With LCD shutter glasses, the left and right images are actively synchronized with the on-off status of the shutters; the synchronization can be hard-wired or remotely controlled via an infrared emitter. The color and polarization filters are known as passive shutters and can be used in both field-sequential and field-concurrent modes, while the LCD shutters are known as active shutters and can be used only in sequential mode. When polarization filters are used in field-sequential mode, it is necessary to polarize the image source with a z-screen, a panel whose polarization alternates in synchrony with the left and right images. When polarization filters are used in field-concurrent mode, two passive polarizers are employed to beam polarized stereo images simultaneously onto a screen. In both configurations, the polarization directions of the left and right images are opposite and have to match those of the polarizer glasses. Desktop displays. Using desktop monitors, often with field-sequential shutters, is the traditional desktop-VR approach, also known as fish-tank VR. Multiple monitors can be tiled together to create a panoramic display system.
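The color-multiplexed (anaglyph) scheme described above can be sketched in a few lines. This is a minimal toy illustration with images as nested lists of RGB tuples; practical anaglyph encoders typically do additional color mixing to reduce retinal rivalry.

```python
def red_cyan_anaglyph(left, right):
    # Color multiplexing: keep the red channel of the left-eye image
    # and the green and blue channels of the right-eye image; red/cyan
    # filter glasses then route each component to the matching eye.
    return [
        [(lp[0], rp[1], rp[2]) for lp, rp in zip(lrow, rrow)]
        for lrow, rrow in zip(left, right)
    ]

# Toy 2x2 stereo pair: a reddish left image and a bluish right image.
left = [[(200, 0, 0)] * 2 for _ in range(2)]
right = [[(0, 0, 150)] * 2 for _ in range(2)]
frame = red_cyan_anaglyph(left, right)   # every pixel is (200, 0, 150)
```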
Mirrors or beam splitters can be employed with desktop display configurations to overlay 3D graphics on a physical workspace or to merge views from multiple monitors. Compared with projection-based spatial displays, desktop stereo systems are non-immersive, and their applications are limited to near-field operations and personal usage. Projection displays. The mainstream spatial displays extensively employ CRT, LCD, or digital light processing (DLP) projectors to cast stereo images onto single or multiple, planar or curved screen surfaces. Two types of projection exist: front and rear projection. In front projection, the projectors are located on the same side of the screen surface as the observer; consequently, the observer might cast a shadow on the screens. In rear projection, the projectors are located on the opposite side of the screens to avoid shadowing. Metallic screen surfaces that do not depolarize incoming light are required for systems using polarization filters, as ordinary organic screen materials would depolarize the reflected light and consequently fail to keep the stereo pair separated. Since the introduction of the CAVE (CAVE Automatic Virtual Environment), many flavors of projection displays have been developed, including highly immersive surrounding displays and various embedded display systems [Cruz-Neira et al., 1993]. Surrounding displays, such as CAVEs, CUBEs, domes, and panoramic displays, feature multiple planar or single curved screen surfaces that encapsulate multiple users in an immersive virtual environment (VE). Embedded display systems, such as workbench and wall displays, integrate one or a small number of screens to create a semi-immersive VE that is embedded within the surrounding real world. Auto-stereoscopic displays. Unlike eye-aided displays, auto-stereoscopic displays send stereo pairs directly to the correct eyes, light spatial points within a display volume, or reconstruct the light emitted from a 3D object. These schemes represent a dramatic change in display design. Parallax displays. Two-dimensional CRT monitors or LCD panels, referred to as base displays, are overlaid with an array of light-directing elements that direct the light emitted from a screen pixel only to the correct eye. LCD pixels have higher positional accuracy and stability than CRT pixels, so LCD panels are usually the primary choice of base display. The pixels of a base display are divided into two or more groups, one group per viewpoint. The array of light-directing elements generates a set of viewing windows through which the stereo images are observed by the corresponding eyes. In the case of multiple views, the pixel resolution per view is significantly lower than that of the base display. Typical examples of light-directing elements include the parallax barrier and the lenticular lens array [Halle, 1997].
The parallax barrier is the simplest light-directing approach; its principle is illustrated in Figure 2(a). The left and right images are interlaced in columns on the base display, and the parallax barrier is positioned so that the left and right image pixels are blocked except in the regions of the left and right viewing windows. Lenticular sheet displays apply an array of optical elements, such as cylindrical lenses arranged vertically over a 2D base display; their principle is illustrated in Figure 2(b). The cylindrical lenses direct the diffuse light from a pixel so that it can be seen only within a limited angle in front of the display, thus allowing different pixels to be directed to a limited number of defined viewing windows.
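The column interlacing used by both parallax-barrier and lenticular base displays can be sketched as follows. Images are plain nested lists for simplicity, and assigning even columns to the left eye is an arbitrary illustrative choice.

```python
def interlace_columns(left, right):
    # Even pixel columns carry the left image, odd columns the right;
    # the barrier or lens array then steers each set of columns to the
    # matching viewing window.  Each eye therefore sees only half of
    # the base display's horizontal resolution.
    return [
        [l if x % 2 == 0 else r for x, (l, r) in enumerate(zip(lrow, rrow))]
        for lrow, rrow in zip(left, right)
    ]

L = [[1] * 6 for _ in range(2)]   # toy "left" image
R = [[2] * 6 for _ in range(2)]   # toy "right" image
panel = interlace_columns(L, R)   # each row reads [1, 2, 1, 2, 1, 2]
```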

Volumetric displays. Instead of presenting two separate 2D images, volumetric displays directly illuminate spatial points within a display volume by filling or sweeping out a volumetric image space, and they appear to create a transparent volume in space. Common to different volumetric displays is a volume or region occupying three physical dimensions within which image components may be positioned, depicted, and perhaps manipulated. Many volumetric display technologies impose minimal restriction on the viewing angle, so an observer can move around and view the 3D content from practically any orientation. Examples of volumetric displays include (1) solid-state devices, which display voxel data within a transparent substrate by generating light points with an external source (e.g. using two intersecting infrared laser beams of different wavelengths to excite electrons to a higher energy level and thereby emit visible light); (2) multi-planar volumetric displays, which build up a 3D volume from a time-multiplexed series of 2D images via a swiftly moving or spinning display element; and (3) varifocal mirror displays, which apply flexible mirrors to sweep an image of a CRT screen through different depth planes of an image volume. An excellent discussion of various volumetric display technologies can be found in [Blundell & Schwarz, 2000]. Holographic displays. Taking a fundamentally different approach, holographic displays reconstruct the light emitted by a 3D object from interference fringes generated through a holographic recording process. The interference fringes, when appropriately illuminated, function as a complex diffraction grating that reconstructs both the direction and the intensity of the light reflected off the original object. However, an optical hologram cannot be produced in real time and is thus not appropriate for dynamic display.
An electronic holographic display computes 3D holographic images from a 3D scene description and can potentially lead to real-time electronic holography, known as holovideo. It involves two main processes: fringe computation, in which the 3D description is converted into digital holographic fringes, and optical modulation, in which light is modulated by the fringes and output as 3D images. The grand challenge lies in the enormous amount of computation required by holography, because holographic fringes must be computed with a sample spacing of approximately 0.5 microns, rather than the roughly 100-micron sample spacing of a regular 3D display. Various experimental methods, such as horizontal-parallax-only computation, holographic bandwidth compression, and faster digital hardware, have enabled the computation of fairly complex scene content at interactive rates.
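The scale of the fringe-computation problem follows directly from the two sample spacings quoted above; the 10 cm display size in this back-of-the-envelope sketch is an illustrative assumption.

```python
def samples_for_surface(width_m, height_m, spacing_m):
    # Number of samples needed to cover a display surface at a given
    # uniform sample spacing.
    return (width_m / spacing_m) * (height_m / spacing_m)

# Illustrative 10 cm x 10 cm display area:
holo = samples_for_surface(0.1, 0.1, 0.5e-6)    # fringes at ~0.5 micron
panel = samples_for_surface(0.1, 0.1, 100e-6)   # pixels at ~100 microns
ratio = holo / panel   # each 100-micron pixel spans 200 x 200 fringe samples
```

The roughly 40,000-fold increase in sample count per frame is why horizontal-parallax-only rendering and bandwidth compression are attractive shortcuts.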

Conclusion Stereoscopic displays are intriguing subjects of research owing to their tremendous potential for applications. However, none of the existing technologies yet matches the power of the human visual system in every aspect. For instance, the majority of existing stereoscopic displays decouple the physiological actions of accommodation and convergence, few offer resolvability matching foveal visual acuity, and few are capable of presenting natural occlusion cues cohesively and correctly. While much research is still needed to develop displays that satisfy the demanding human visual system without side effects, this is nonetheless an exciting era in which to anticipate the new display technologies about to emerge. Fueled by the explosive proliferation of information technology, stereoscopic display systems are reshaping how we explore science, conduct business, and advance technology, ultimately improving the ways we live and work.

Key Bibliography
1. J. E. Melzer & K. Moffitt (eds.), Head Mounted Displays: Designing for the User. New York: McGraw-Hill, 1997.
2. C. Cruz-Neira, D. J. Sandin, and T. A. DeFanti, "Surround-screen projection-based virtual reality: the design and implementation of the CAVE," Computer Graphics (Proc. SIGGRAPH 1993), pp. 135-142. ACM Press/ACM SIGGRAPH, 1993.
3. M. Halle, "Autostereoscopic displays and computer graphics," Computer Graphics, 31(2), pp. 58-62. ACM Press/ACM SIGGRAPH, 1997.
4. B. Blundell and A. Schwarz, Volumetric Three-Dimensional Display Systems. New York: John Wiley & Sons, 2000.


Figure Captions

Figure 1: A taxonomy of stereoscopic displays.
Figure 2: Parallax displays: (a) principle of parallax barrier displays; (b) principle of lenticular array displays.

