
INTEGRAL PHOTOGRAPHY SYSTEM 69

In the document 3D Videocommunication (Page 94-98)

Nagoya University, Japan


Figure 4.16 A basic IP method (Reproduced by permission of Dr. Fumio Okano and Makoto Okui, NHK, from Okui and Okano 2003): (a) pickup; (b) display

To produce an IP image composed of many elemental images, a lens array with many convex microlenses is positioned immediately in front of the photographic film. Numerous small elemental images are focused onto the film, their number corresponding to that of the microlenses. The film is then developed to obtain a transparent photograph. For reproduction, this transparent photograph is placed where the film was and is irradiated from behind by diffused light. The light beams passing through the photograph retrace their original routes and converge at the point where the photographed object was, forming an autostereoscopic image.
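The pickup and reproduction geometry described above can be sketched with a simple pinhole approximation of the microlens array. This is a minimal illustrative model, not the actual optics: all function names, positions and dimensions below are our own assumptions.

```python
# Pinhole approximation of the microlens array (illustrative only):
# each lens at x_lens images an object point onto its own elemental
# image on the film plane, a distance `gap` behind the array.

def elemental_pixel(x_obj, z_obj, x_lens, gap):
    """Film position hit by the ray from object point (x_obj, z_obj)
    passing straight through the pinhole-lens at x_lens."""
    slope = (x_lens - x_obj) / z_obj   # similar triangles
    return x_lens + slope * gap

def reconstructed_ray(x_pix, x_lens, gap):
    """On display, light from film pixel x_pix passes back through the
    same lens; return the slope of the reversed ray (per unit depth)."""
    return (x_lens - x_pix) / gap

# One object point seen through a row of microlenses
# (1 mm lens pitch, 3 mm lens-to-film gap, all values assumed).
pitch, gap = 1.0, 3.0
x_obj, z_obj = 2.0, 100.0          # point 100 mm in front of the array
lenses = [i * pitch for i in range(-2, 3)]
pixels = [elemental_pixel(x_obj, z_obj, xl, gap) for xl in lenses]

# Reversed rays from every elemental image re-converge at the object point:
for xl, xp in zip(lenses, pixels):
    s = reconstructed_ray(xp, xl, gap)
    assert abs((xl + s * z_obj) - x_obj) < 1e-9
```

Each elemental image records the object from a slightly different position; reversing every recorded ray through its own lens makes the rays re-converge at the original object point, which is the essence of the autostereoscopic reconstruction.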

The basic IP method has the following problems. The first is that the reconstructed images are viewed from the opposite direction to that of the image pickup. The second problem is optical cross-talk between elemental images, which causes double or multiple image interference. The third problem is the resolution of the reproduced image.

NHK developed a real-time IP system, shown in Figure 4.17, that solves these problems. The film and the transparent photograph in Figure 4.16 are replaced by a high-definition television (HDTV) camera and an LCD, respectively, to realize a real-time system. Furthermore, a gradient-index (GRIN) lens array (Arai et al. 1998) is introduced as the pickup lens array to avoid the first and second problems. A depth control lens (Okano et al. 1999), which is a large-aperture convex lens, is employed to solve the third problem.

It should be noted that the IP System and FTV are based on the same principle, because the elemental images of the IP System are the same as the ray-space of FTV. The difference

Figure 4.17 Configuration of a real-time IP system (Reproduced by permission of Dr. Fumio Okano and Makoto Okui, NHK, from Okui and Okano 2003)

Table 4.2 Specifications of the 1D-II 3D display system (Taira et al. 2004)

                        20.8-inch type         15.4-inch type
Panel                   20.8-inch LCD          15.4-inch LCD
Panel resolution        3200×2400 (QUXGA)      1920×1200 (WUXGA)
Number of parallaxes    32                     18
3D resolution           300×800                300×400
Viewing angle           ±10°                   ±14°
Luminance               160 cd/m²              150 cd/m²
Contents                Still images           Movies/interactive

between the two systems is the sampling density of light rays in the position and direction dimensions of the ray-space. In the IP System, the sampling is dense in position and sparse in direction. In FTV, by contrast, the sampling is sparse in position and dense in direction. This is why the IP System needs no view interpolation but the resolution of each view image is low. The resolution of the view image has been improved by using a super-HDTV camera instead of the HDTV camera.
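The sampling-density contrast between the two systems can be made concrete with a toy ray-space model. The axis names and sample counts below are illustrative assumptions, not figures from the text: a light ray crossing a reference plane is treated as a point with a position coordinate and a direction coordinate.

```python
# Toy ray-space model (illustrative; sample counts are assumed, not quoted):
# a ray is a point (position index, direction index) on a sampling grid.

def ray_space_grid(n_position, n_direction):
    """All sampled rays for the given counts along the position axis
    and the direction axis of the ray-space."""
    return [(i, j) for i in range(n_position) for j in range(n_direction)]

# IP-style sampling: dense in position (one sample per lens position),
# sparse in direction (few views behind each microlens).
ip_samples = ray_space_grid(n_position=3200, n_direction=32)

# FTV-style sampling: sparse in position (one sample per camera),
# dense in direction (every pixel of each camera image is a direction).
ftv_samples = ray_space_grid(n_position=16, n_direction=6400)

# Both tile the same ray-space; here the totals even coincide, which
# reflects why the two systems share one underlying principle.
assert len(ip_samples) == len(ftv_samples) == 102400
```

Under this reading, IP trades directional (view) resolution for positional resolution, so each reconstructed view is complete but coarse, while FTV must interpolate between its sparse positions but delivers full-resolution views.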

4.6.2 1D-II 3D Display System

This system was developed by Toshiba and called the ‘1D Integral Imaging (1D-II) 3D Display System’ (Fukushima et al. 2004; Saishu et al. 2004; Taira et al. 2004). Two PC-based 3D display systems, with diagonal sizes of 20.8 and 15.4 inches, were developed using one-dimensional integral imaging. The specifications of the systems are listed in Table 4.2.

A horizontal resolution of 300 pixels, with 32 parallaxes in the 20.8-inch type and 18 parallaxes in the 15.4-inch type, was achieved by a mosaic pixel arrangement of the display panel. These systems have a wide viewing area, and a viewer can observe 3D images with continuous motion parallax, suppressing the flipping of 3D images observed in conventional multi-view display systems. By using lenticular sheets, these systems realized high brightness in spite of having many parallaxes.
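A back-of-the-envelope reading of the mosaic pixel arrangement relates the figures in Table 4.2. This is our own interpretation, not a formula from the text: the RGB sub-pixel columns of the panel are divided among the parallax views, and the three colour components of one 3D pixel come from three panel rows.

```python
# Approximate 3D resolution from panel resolution and parallax count
# (our own reading of Table 4.2, not a formula given in the text).

def approx_3d_resolution(panel_h, panel_v, parallaxes, subpixels=3):
    h3d = panel_h * subpixels // parallaxes   # sub-pixel columns per view
    v3d = panel_v // subpixels                # three rows form one colour pixel
    return h3d, v3d

print(approx_3d_resolution(3200, 2400, 32))   # 20.8-inch type
print(approx_3d_resolution(1920, 1200, 18))   # 15.4-inch type
```

This reproduces the 300×800 figure for the 20.8-inch type exactly; the 15.4-inch estimate (320 horizontally) slightly exceeds the quoted 300, suggesting some sub-pixel columns go unused, so the formula should be treated only as an approximation.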

PC plug-ins for creating 3D images from CG software were also developed. CG movie content and real-time interaction were demonstrated on the 15.4-inch type. The processing flow of the real-time interaction is shown in Figure 4.18.

4.7 SUMMARY

Free viewpoint systems have long been the stuff of dreams. Now, many such systems have become available, as shown in this chapter. Their cost is still high because they require large amounts of memory and computation. However, rapid technological progress will soon make such systems easier to realize.

The free viewpoint systems are very important not only in application but also academically, because such systems eventually need frontier technologies that treat rays one by one


Figure 4.18 Processing flow of the real-time interaction of the 1D-II 3D display system (Reproduced by permission of the Committee of 3D Conference, Japan, from Taira et al. 2004)

in acquisition, processing and display. In this chapter, we have seen some examples of such technologies in ray acquisition, ray processing and ray display. Other types of ray acquisition and ray display systems have also been reported (Endo et al. 2000; Fujii and Tanimoto 2004). This work will open a new frontier in image engineering, namely ‘Ray-Based Image Engineering’.

REFERENCES

Arai J, Okano F, Hoshino H and Yuyama I 1998 Gradient-index lens array method based on real-time integral photography for three-dimensional images. Applied Optics, 37(11), 2034–2045.

Chen WC, Bouguet JY, Chu MH and Grzeszczuk R 2002 Light field mapping: efficient representation and hardware rendering of surface light fields. ACM Transactions on Graphics, 21(3), 447–456.

Debevec P, Yu Y and Borshukov G 1998 Efficient view-dependent image-based rendering with projective texture mapping. Proceedings of ACM SIGGRAPH ’98, USA, 105–116.

Droese M, Fujii T and Tanimoto M 2004 Ray-space interpolation constraining smooth disparities based on loopy belief propagation. Proceedings of the 11th International Workshop on Systems, Signals and Image Processing (IWSSIP): Ambient Multimedia, Poland, 247–250.

Endo T, Kajiki Y, Honda T and Sato M 2000 Cylindrical 3-D video display observable from all directions. Proceedings of Pacific Graphics, 300–306.

Fujii T and Tanimoto M 2002 Free-viewpoint television based on the ray-space representation. Proceedings of SPIE ITCom, 175–189.

Fujii T and Tanimoto M 2004 Real-time ray-space acquisition system. Proceedings of SPIE, 5291, USA, 179–187.

Fujii T, Kimoto T and Tanimoto M 1996 Ray space coding for 3D visual communication. Proceedings of Picture Coding Symposium, 447–451.

Fukushima R, Taira K, Saishu T and Hirayama Y 2004 Novel viewing zone control method for computer generated integral 3-D imaging. Proceedings of SPIE, 5291(A62).

Goldlucke B, Magnor M and Wilburn B 2002 Hardware accelerated dynamic light field rendering. Proceedings of VMV 2002.

Higashi T, Fujii T and Tanimoto M 2004 Three-dimensional display system for user interface of FTV. Proceedings of SPIE, 5291A(25), USA.

http://www.ponycanyon.co.jp/eizoplay/.

http://www.ri.cmu.edu/events/sb35/tksuperbowl.html.

http://www.sonymusic.co.jp/VisualStore/WonderZone/.

Kanade T, Rander PW and Narayanan PJ 1997 Virtualized reality: constructing virtual worlds from real scenes. IEEE Multimedia, 4(1), 34–47.

Kobayashi T, Fujii T, Kimoto T and Tanimoto M 2000 Interpolation of ray-space data by adaptive filtering. Proceedings of SPIE, 3958, USA, 252–259.

Levoy M and Hanrahan P 1996 Light field rendering. Proceedings of SIGGRAPH, 31–42.

Lippmann G 1908 Comptes-Rendus Academie des Sciences, 146, 446–451.

Matsuyama T and Takai T 2002 Generation, visualization, and editing of 3D video. Proceedings of the Symposium on 3D Data Processing Visualization and Transmission, Italy, 234–245.

Matsuyama T, Wu X, Takai T and Nobuhara S 2003 Real-time generation and high fidelity visualization of 3D video. Proceedings of MIRAGE 2003, 1–10.

Matsuyama T, Wu X, Takai T and Wada T 2004 Real-time dynamic 3-D object shape reconstruction and high-fidelity texture mapping for 3-D video. IEEE Transactions on Circuits and Systems for Video Technology, 14(3), 357–369.

Matusik W and Pfister H 2004 3D-TV: a scalable system for real-time acquisition, transmission and autostereoscopic display of dynamic scenes. Proceedings of ACM SIGGRAPH, 814–824.

Müller K, Smolic A, Droese M, Voigt P and Wiegand T 2003a Multi-texture modelling of 3D traffic scenes. Proceedings of ICME 2003, USA, 6–9.

Müller K, Smolic A, Droese M, Voigt P and Wiegand T 2003b 3D modelling of traffic scenes using multi-texture surfaces. Proceedings of PCS 2003, France, 309–314.

Na Bangchang P, Fujii T and Tanimoto M 2003 Experimental system of free viewpoint television. Proceedings of SPIE, 5006, USA, 554–563.

Na Bangchang P, Fujii T and Tanimoto M 2004 Ray-space data compression using spatial and temporal disparity compensation. Proceedings of IWAIT, Singapore, 171–175.

Narayanan PJ, Rander PW and Kanade T 1998 Constructing virtual worlds using dense stereo. Proceedings of IEEE 6th International Conference on Computer Vision, India, 3–10.

Okano F, Arai J, Hoshino H and Yuyama I 1999 Three-dimensional video system based on integral photography. Optical Engineering, 38(6), 1072–1077.

Okui M and Okano F 2003 Real-time 3-D imaging method based on integral photography. Report (for AHG on 3DAV), ISO/IEC JTC 1/SC 29/WG 11, m10088, Australia.

Panahpour Tehrani M, Na Bangchang P, Fujii T and Tanimoto M 2004 The optimization of distributed processing for arbitrary view generation in camera sensor networks. IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, E87-A(8), 1863–1870.

Saishu T, Taira K, Fukushima R and Hirayama Y 2004 Distortion control in a one-dimensional integral imaging autostereoscopic display system with parallel optical beam groups. Proceedings of SID, 53(3), USA, 1438–1441.

Saito H, Baba S, Kimura M, Vedula S and Kanade T 1999 Appearance-based virtual view generation of temporally-varying events from multi-camera images in the 3D room. Proceedings of 3-D Imaging and Modelling, 516–525.

Sekitoh M, Fujii T, Kimoto T and Tanimoto M 2001 Bird’s eye view system for ITS. Proceedings of the IEEE Intelligent Vehicle Symposium, 119–123.

Sekitoh M, Kutsuna T, Fujii T, Kimoto T and Tanimoto M 2003 Arbitrary view generation by model-based interpolation in the ray space. Proceedings of SPIE, 4660(58), USA, 465–474.

