Human beings see two slightly different viewpoint images of an object, one through each eye. The brain then recognizes the 3D stereopsis of the object by fusing them according to the binocular disparity of the stereo image pair. Most conventional 3D display systems, including stereoscopic and auto-stereoscopic 3D displays, have been implemented by imitating this human visual system. In the case of stereoscopic 3D displays, the viewer is required to wear special glasses, such as anaglyph, polarized, or shutter glasses, so that the left and right images are received separately by the corresponding eyes. In auto-stereoscopic 3D displays, however, a 3D image can be presented to the viewer without any special glasses: optical elements such as lenticular sheets or parallax barriers are attached to the display panel to direct the left and right images displayed on it to the appropriate eyes without interference between them.
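The geometry underlying binocular disparity can be sketched with the standard pinhole stereo relation, where depth Z follows from disparity d, baseline b, and focal length f as Z = f·b/d. The following is a minimal illustrative sketch, not part of any display system described here; the baseline and focal-length values are assumptions chosen only for the example.

```python
# Depth from binocular disparity: a minimal sketch of the pinhole
# stereo relation Z = f * b / d. The default baseline (~65 mm, a
# typical interocular distance) and focal length are illustrative
# assumptions, not values from the text.
def depth_from_disparity(disparity_m, baseline_m=0.065, focal_m=0.017):
    """Recover depth Z (metres) from disparity d, baseline b,
    and focal length f, all in metres: Z = f * b / d."""
    return focal_m * baseline_m / disparity_m

# A larger disparity corresponds to a nearer point:
near = depth_from_disparity(1e-4)   # 0.1 mm disparity
far = depth_from_disparity(1e-5)    # 0.01 mm disparity
```

This inverse relation between disparity and depth is what the brain exploits when fusing the left and right images into a single 3D percept.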
Whether stereoscopic or auto-stereoscopic, however, these approaches have suffered from fundamental problems such as eye fatigue and dizziness ever since C. Wheatstone invented the first stereoscope in 1838. This is the main shortcoming of the technology, and the main reason it has not yet gained wide acceptance. The ultimate goal of 3D R&D is to develop a technology that can display a lifelike 3D image in real space, with an image quality comparable to that of a high-definition 2D image, without requiring the viewer to wear any special glasses. Ultimately, such an advance would give viewers the feeling of being present in the scene.
For this purpose, several real 3D display technologies have been developed. One of them is holography. The holographic 3D display, which differs fundamentally from the conventional stereoscopic approach, has been regarded as one of the most attractive approaches for creating an authentic illusion of volumetric 3D objects, because holographic technology can supply high-quality images and accurate depth cues to the human eye without any special viewing devices.
Holography was invented by D. Gabor in 1947. He recognized that when a suitable coherent reference wave is present simultaneously with the light scattered from a 3D object, information about both the amplitude and the phase of the scattered waves can be recorded, even though recording media respond only to light intensity. He demonstrated that from such a recorded interference pattern, which he called a hologram (meaning a total recording), an image of the original 3D object can ultimately be reconstructed.
However, recording a hologram of a real object requires interference between two laser beams with a high degree of mutual coherence, carried out in a dark room. The recording system must be kept very stable, since even a slight movement can destroy the interference fringes in which both the intensity and the phase information of the 3D object are contained. These requirements, together with the development and printing processes, have prevented conventional holography from becoming widely employed. As a solution to these limitations, the computer-generated hologram (CGH) has been suggested.
A CGH is a digital hologram generated by computing the interference pattern produced by the object and reference waves. Using this CGH pattern, an electro-holographic 3D display system can be constructed. In this approach, a ray-tracing method was originally employed to calculate the contribution of each object point source at the hologram plane. That is, the object image to be generated is approximated as a collection of self-luminous points of light; the fringe pattern for each object point is calculated by ray tracing, and all of them are summed to obtain the whole interference pattern of the object image.
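The point-source summation above can be sketched as follows. This is a minimal illustrative implementation, not the algorithm of any specific system mentioned in the text; the wavelength, pixel pitch, hologram size, and object points are all assumed example values.

```python
# Ray-tracing (point-source) CGH: sum the spherical waves emitted by
# self-luminous object points at the hologram plane, add a reference
# wave, and record the resulting intensity fringe pattern.
# All physical parameters below are illustrative assumptions.
import numpy as np

def compute_cgh(points, amplitudes, wavelength=633e-9, pitch=10e-6, n=256):
    """Return the n x n intensity fringe pattern for a set of object
    points given as (x, y, z) tuples in metres."""
    k = 2 * np.pi / wavelength                      # wavenumber
    coords = (np.arange(n) - n / 2) * pitch         # hologram-plane sampling
    x, y = np.meshgrid(coords, coords)
    field = np.zeros((n, n), dtype=complex)
    for (xo, yo, zo), a in zip(points, amplitudes):
        r = np.sqrt((x - xo) ** 2 + (y - yo) ** 2 + zo ** 2)
        field += a * np.exp(1j * k * r) / r         # spherical wave from point
    reference = np.abs(field).max()                 # simple on-axis plane reference
    return np.abs(field + reference) ** 2           # recordable intensity

points = [(0.0, 0.0, 0.1), (1e-4, 0.0, 0.12)]       # two example object points
hologram = compute_cgh(points, [1.0, 0.8])
```

The cost of this direct summation grows with the product of object points and hologram pixels, which is why the text later notes the heavy computational appetite of CGH-based displays.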
Now, CGH is regarded as an emerging technology, made possible by increasingly powerful computers, that avoids the interferometric recording step of conventional hologram formation. Instead, a computer calculates the holographic fringe pattern, which is then used to set the optical properties of a spatial light modulator (SLM), such as a liquid-crystal microdisplay. The SLM diffracts the readout light wave, in a manner similar to a standard hologram, to yield the desired optical wavefront.
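Loading a computed fringe pattern onto an SLM typically involves quantizing it to the modulator's addressable levels. The sketch below assumes a phase-only SLM with 8-bit addressing, which is a common but here purely illustrative configuration; the complex field is a random stand-in rather than output from a real CGH computation.

```python
# Encoding a computed complex wavefront for a phase-only SLM:
# a minimal sketch assuming an 8-bit (256-level) phase modulator.
# The input field is a random placeholder, not a real hologram.
import numpy as np

rng = np.random.default_rng(0)
field = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))

phase = np.angle(field)                 # keep the phase, discard amplitude
# Map phase in (-pi, pi] to integer drive levels 0..255:
levels = np.round((phase + np.pi) / (2 * np.pi) * 255).astype(np.uint8)
```

Discarding the amplitude is a lossy encoding choice; phase-only modulation is popular because it diffracts light efficiently, at the cost of some reconstruction noise.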
Compared to conventional holographic approaches, CGH does not rely on the availability of specialized holographic recording materials, can synthesize optical wavefronts without recording a physical manifestation of them, and offers unprecedented wavefront control by making holographic data easy to store, manipulate, transmit, and replicate. Although CGH-based display systems can be built today, their high cost makes them impractical for many applications. However, as computer and optical hardware costs decrease, CGH displays will become a viable alternative in the near future.
CGH provides flexible control of light, making it suitable for a wide range of display types, including 2D, stereoscopic, auto-stereoscopic, volumetric, and true 3D imaging. CGH-based display technology can produce systems with unique characteristics impossible to achieve with conventional approaches.
In 2003, the University of Texas suggested the possibility of displaying dynamic holographic images by computing the holograms of objects in a three-dimensional scene and then transcribing the 2D digital hologram onto a digital micro-mirror device (DMD) illuminated with coherent light. In 2005, QinetiQ laboratories of the United Kingdom presented true full-color 3D images generated using specially designed large-pixel-count CGH systems. They calculated the computer-generated holograms on a 102-node Linux PC cluster of dual IA-32 Pentium III 1.26-GHz Tualatin-core CPUs with 512-KB cache, 1 GB of memory per node, and two 36-GB Ultra160 SCSI disk drives, along with a Myrinet interconnect and a 7.5-TB storage area network. They showed monochromatic and color, full-parallax 3D images produced by a computer-generated hologram having 10⁸ pixels.
In 2007, a team of MIT scientists developed a prototype of a small, inexpensive holographic video system that works with consumer computer hardware such as PCs or gaming consoles, enabling users to view images in three dimensions. The Mark III is the third generation of holographic video display that MIT has developed since the early 1990s; it currently offers only monochromatic images, and its viewing volume is equivalent to an 80-mm cube, still too small for practical applications.
The 3DRC at Kwangwoon University also developed an incoherent holographic video system based on a modified triangular interferometer. In this system, complex holograms of 3D objects, free of bias and conjugate images, can be obtained by controlling a combination of wave plates, and 3D images can be viewed through optical reconstruction of the complex hologram with a modified Mach-Zehnder interferometer.
More recently, SeeReal Technologies of Germany demonstrated a new holographic display at the SID 2007 conference. This prototype uses a 20-inch display that shows a real high-resolution 3D image in front of the screen. SeeReal Technologies assigns a 30×30-pixel array to each 3D scene point, the so-called sub-hologram approach.
The versatility of CGH, combined with its unique ability to produce full-depth-cue 3D images at beyond-eye resolution, floating in space, and with an extended color gamut, has led some to label CGH the ultimate display technology. However, many CGH-based displays demand pixel counts that far exceed those of other display types, and unique additional computational operations add to the cost of such systems, particularly high-frame-rate interactive ones. Thus, for many applications, lower-cost, simpler display technologies will be more appropriate. Nevertheless, with computer systems and display hardware continuing to decrease in price and the other required technologies advancing rapidly, the question is not whether CGH systems will become a practical generic display technology but, rather, how soon.