
Visual fatigue reduction based on depth adjustment for DIBR system.

1. Introduction

Recently, three-dimensional television (3D TV) has drawn wide research interest [1], and more and more people desire a compelling 3D experience through a TV at home. In a 3D TV system, stereoscopic content plays an important role in constructing depth perception [2], which drives the production of ever higher-quality 3D content.

Conventional 3D content is acquired and transmitted in a format composed of two video streams [2][3]. However, even if content in this format is filmed with high quality (e.g. high definition, precise control of the shooting conditions), it may still cause visual fatigue in different groups of people [4]. Excessive horizontal parallax, which often occurs when content creators intend to provide stronger 3D perception, is one of the most important factors leading to visual fatigue [4][5][6][7]. It is therefore important to give viewers the ability to adjust the horizontal parallax (and hence the perceived depth) in real time to suit their own preferences [4].

Different from the conventional 3D video format, the newer depth-image-based 3D video format, which consists of regular 2D color video and an accompanying depth-image sequence of the same spatio-temporal resolution [8], gives viewers the ability to adjust the perceived depth themselves. This format requires the so-called depth-image-based rendering (DIBR) technique to generate the stereo pair in a 3D TV.

Several studies have examined how to adjust perceived depth to avoid visual fatigue [6][9][10]. Most of them focus on the conventional 3D video format and off-line processing. Recently, we proposed a depth adjustment method for the depth-image-based 3D video format under the condition that camera calibration parameters, such as the intrinsic and extrinsic matrices, are known [11]. Subjective evaluations show that this method can generate comfortable stereoscopic images by changing the calibration parameters. In contrast, this paper presents a new depth adjustment method for visual fatigue reduction in DIBR systems that needs no calibration parameters. By analyzing 3D image warping, the perceived depth is expressed as a function of three adjustable parameters: the virtual view number, the scale factor and the depth value of the ZPS (zero parallax setting). As a result, the perceived depth can be adjusted for different viewers in real time simply by changing these parameters carefully, thereby avoiding visual fatigue for the depth-image-based 3D video format.

The remainder of this paper is organized as follows. Section 2 presents a 3D image warping equation that makes the horizontal sensor parallax explicit for a DIBR system. Section 3 discusses the relationship between horizontal sensor parallax and perceived depth, and proposes a depth adjustment method based on this relationship. Section 4 provides a detailed discussion of experiments on visual fatigue when the proposed method is applied to a DIBR system. Conclusions are given in Section 5.

2. 3D Image Warping

As mentioned above, if a 3D TV system adopts the depth-image-based 3D video format, DIBR must be performed at the receiver side to create virtual views (destination images). The destination images are created from reference images and the corresponding depth images/maps, usually in three steps: pre-processing of the depth image, 3D image warping and hole-filling [2]; the depth adjustment can be performed during 3D image warping.

[FIGURE 1 OMITTED]

Theoretically, an arbitrary view can be synthesized by applying 3D image warping to the reference image points. For simplicity, we consider only the commonly used shift-sensor camera setup, in which the vertical coordinate of the projection of any 3D point is the same on every image plane [4]. Let (u_ref, v_ref) and (u_des, v_des) be the matching points (corresponding points) in the reference image and the destination image respectively, as shown in Fig. 1. The 3D image warping equations, which define the relationship between matching points in the reference and destination images, can then be deduced from the pin-hole camera model:

u_des = u_ref + n x r x (D(u_ref, v_ref) - D_zps) / 4096,    v_des = v_ref, (1)

where n is the virtual view number (multiple views are generated using DIBR for an auto-stereoscopic display), r is the scale factor, D_zps is the depth value of the ZPS plane, and D(u_ref, v_ref) is the depth value of point (u_ref, v_ref) in the depth image. The ranges of these parameter values can be found in Table 1.

Let D_c = u_des - u_ref. D_c is a variable in camera space, usually called the horizontal sensor parallax. D_c plays a crucial role in visual fatigue reduction, since the perceived depth changes when D_c is changed; this is addressed in detail in the next section.

Note that no camera calibration parameters, such as intrinsic and extrinsic matrices, are needed to generate a virtual view using (1). In addition, (1) is better suited to hardware implementation than the 3D image warping equations presented in [1], [2] and [12].
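Because (1) contains no calibration matrices, the warp reduces to a per-pixel horizontal shift. The following is a minimal sketch, assuming the shift takes the form n x r x (D - D_zps)/4096 (the form consistent with the bound derived later in (8)); the function name and the sign convention are our assumptions, not the paper's.

```python
def warp_u(u_ref: int, n: int, r: float, d_zps: int, depth: int) -> float:
    """Horizontal destination coordinate for one reference pixel.

    Assumed shift form: n * r * (D - D_zps) / 4096. Under the shift-sensor
    setup, v_des simply equals v_ref, so only u changes.
    """
    return u_ref + n * r * (depth - d_zps) / 4096.0
```

A pixel lying exactly on the ZPS plane (depth == d_zps) maps to itself, which is precisely what zero parallax setting means.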

3. Depth Adjustment

The previous section gave a brief introduction to 3D image warping in a DIBR system and introduced D_c for depth adjustment. This section investigates how to change D_c to adjust the perceived depth and describes the adjustment method in detail.

3.1. Perceived Depth

The depth perceived in viewer space depends on the geometry of the 3D TV display on which the generated images are shown, as shown in Fig. 2. Note that the world coordinate system xyz is used for all objects, and the unit length is millimeters (mm). Without loss of generality, we assume that the z-axis of the world coordinate system passes through the screen center, that the viewer's left eye is located at e_l = [-t_x/2, 0, 0]^T, and that the right eye is at e_r = [t_x/2, 0, 0]^T (t_x > 0). Under this viewing condition, the origin o is located at the position of the cyclopean eye. Let w = [x_i, y_i, z_i]^T denote a "virtual" point in viewer space. Observing w through e_l and e_r yields the stereo pair x_l = [x_l, y_l, z_l]^T and x_r = [x_r, y_r, z_r]^T, which lie in the screen plane. Thus, we have

x_r - x_l = t_x (z_i - V) / z_i, (2)

where V is the viewing distance from the eyes to the screen.

[FIGURE 2 OMITTED]

Let D_s = x_r - x_l. D_s is usually called the horizontal screen parallax, in contrast to the horizontal sensor parallax. Likewise, let d = z_i - V; d is the perceived depth, which directly reflects the depth perception that the horizontal screen parallax brings to the viewer's eyes. Accordingly, we have

d = D_s V / (t_x - D_s). (3)

It can be seen from (3) that the depth d perceived by the viewer is determined by the horizontal screen parallax D_s, which in turn is determined by the horizontal sensor parallax D_c:

D_s = D_c W_s / W_i, (4)

where W_s denotes the width of the screen (in millimeters) and W_i denotes the width of the image (in pixels).

Substituting (1) and (4) into (3), we get:

d = n r (D(u_ref, v_ref) - D_zps) W_s V / [4096 W_i t_x - n r (D(u_ref, v_ref) - D_zps) W_s]. (5)

For specific display and viewing conditions, the viewer-space variables W_s, V and t_x are not adjustable. Consequently, the perceived depth d is determined by D_c; i.e., depth adjustment can be realized by controlling the amount of horizontal sensor parallax of the stereoscopic images. Since the horizontal sensor parallax D_c is expressed as a function of three adjustable parameters, the virtual view number n, the scale factor r and the ZPS depth value D_zps, the depth d can be adjusted by changing these parameters during 3D image warping.
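The chain from sensor parallax to perceived depth, (4) followed by (3), can be traced numerically. A small sketch; the concrete screen width, viewing distance and eye separation used in the test are illustrative values, not the paper's experimental setup:

```python
def perceived_depth(d_c: float, w_s: float, w_i: int, v: float, t_x: float) -> float:
    """Perceived depth d for a sensor parallax D_c (pixels).

    w_s: screen width (mm), w_i: image width (pixels),
    v: viewing distance (mm), t_x: eye separation (mm).
    """
    d_s = d_c * w_s / w_i          # (4): horizontal screen parallax, in mm
    return d_s * v / (t_x - d_s)   # (3): perceived depth relative to the screen
```

Zero sensor parallax puts the point exactly on the screen (d = 0); positive parallax yields a point behind the screen (d > 0), negative parallax a point in front of it.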

3.2. Depth Adjustment

Theoretically, we may change the value of D_c (and hence the perceived depth) as much as we like by changing the parameters in (5). However, several studies suggest that the maximum horizontal sensor parallax that is still comfortable for viewing is approximately 5% of the width of an image [8][12]. So we get

-(W_i x 5%) <= D_c <= W_i x 5%, (6)

where W_i and D_c are measured in pixels and W_i > 0. If D_c does not satisfy this inequality, excessive horizontal parallax occurs and may result in visual fatigue. We should therefore ensure that D_c satisfies (6) when performing depth adjustment.
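Constraint (6) amounts to a one-line comfort check; a sketch (the function name is ours):

```python
def parallax_comfortable(d_c: float, w_i: int) -> bool:
    """True if |D_c| stays within 5% of the image width, per (6).

    Both d_c and w_i are measured in pixels.
    """
    return abs(d_c) <= 0.05 * w_i
```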

A flowchart describing the proposed depth adjustment method is illustrated in Fig. 3. Three main steps are included:

a) The parameters for depth adjustment, n, r and D_zps in (5), are specified by the viewer;

b) The horizontal sensor parallax D_c in camera space is evaluated to determine whether it satisfies (6). If it does not, the parameters are modified by the proposed algorithm shown in Fig. 4; otherwise, go to c);

c) The destination image is generated by (1) using these parameters.
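The three steps above can be sketched end to end. This assumes the shift form n x r x (D - D_zps)/4096 for (1) and the suggested D_zps of 130 from Table 1; the function name and structure are our illustration, not the paper's implementation:

```python
def adjust_parameters(n, r, d_zps, w_i, suggested_d_zps=130):
    """Steps a)-b): accept the viewer's (n, r, d_zps) if every attainable
    parallax satisfies (6); otherwise try the suggested ZPS value first,
    then shrink r as in (9)-(10). n is left unchanged."""
    limit = 0.05 * w_i                      # comfort bound from (6), in pixels

    def worst_parallax(z):                  # largest |D_c| over depths 0..255
        return abs(n) * r * max(z, 255 - z) / 4096.0

    if worst_parallax(d_zps) <= limit:
        return n, r, d_zps                  # already comfortable
    if worst_parallax(suggested_d_zps) <= limit:
        return n, r, suggested_d_zps        # suggested ZPS value suffices
    d_zps = suggested_d_zps
    bound = limit * 4096.0 / abs(n)         # otherwise shrink r via (9)-(10)
    r = int(min(bound / d_zps, bound / (255 - d_zps)))
    return n, r, d_zps
```

Comfortable settings pass through untouched; overly aggressive ones are pulled back to the tightest depth-extreme bound.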

[FIGURE 3 OMITTED]

To help viewers specify a proper set of initial parameter values in step a), we recommend default values that may cause less visual fatigue, based on our own and other researchers' experiments, as shown in Table 1. Details on how these values are obtained are given below.

From Table 1, it can be seen that the virtual view number n varies from -4 to 4, since 9 views are usually needed simultaneously for an auto-stereoscopic display [13]. If the DIBR system uses a glasses-based stereoscopic display, it is better to set n = 1, because only one virtual view needs to be generated and n = 1 simplifies the warping equation. Other integers within [-4, 4] are also acceptable provided that (6) holds.

At present, a multi-view 3D display system typically uses an 8-bit depth image, owing to the limitations of current depth cameras and because 8 bits are enough to maintain sufficient image quality [14]. Hence the depth value varies from 0 to 255, and the value of D_zps lies within [0, 255]. Thus we have

0 <= |D_zps - D(u_ref, v_ref)| <= 255. (7)

Substituting (1) and (7) into (6), we get

0 <= r <= W_i x 5% x 4096 / (255 x |n|),    n != 0. (8)

As can be seen from (8), when |n| = 1 the scale factor r attains its maximum value, which is about W_i x 80%. To reduce the probability of excessive horizontal parallax, the value of r is limited to [0, W_i x 80%]. In our experiments, the initial value of r is set to W_i x 30% (an empirical value) for all viewers.
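The bound in (8) is straightforward to evaluate; a sketch (the function name is ours):

```python
def max_scale_factor(w_i: int, n: int) -> float:
    """Upper bound on r from (8): W_i * 5% * 4096 / (255 * |n|)."""
    if n == 0:
        raise ValueError("n must be nonzero")
    return w_i * 0.05 * 4096 / (255 * abs(n))
```

For |n| = 1 this evaluates to roughly 0.803 x W_i, the "about W_i x 80%" figure quoted in the text; larger |n| tightens the bound proportionally.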

The choice of the ZPS plane is determined by the setting of D_zps. As the value of D_zps that is comfortable for viewing should lie between one half and two thirds of the maximum gray level [8], we set the default value of D_zps to 130 for an 8-bit depth image.

Once the parameters for depth adjustment (n, r and D_zps) are specified, (1) is used to evaluate the horizontal sensor parallax in step b). For an 8-bit depth image, the value of D_c at each point varies from D_c_min (at D(u_ref, v_ref) = 0) to D_c_max (at D(u_ref, v_ref) = 255) according to (1). If either D_c_min or D_c_max does not satisfy (6), parameter r or D_zps is modified (n is usually left unchanged, since it merely indicates which view the user wants rendered).

[FIGURE 4 OMITTED]

The modification of r or [D.sub.zps] is done as follows:

Parameter D_zps is considered first. If the previous D_zps does not equal the suggested value, it is changed to the suggested value D'_zps, and D_c_min and D_c_max are recalculated using the modified D'_zps. If either of them still does not satisfy (6), the present value of r is changed to r_min, which is determined by

r_min = min(int(r_1), int(r_2)), (9)

where min returns the minimum value of a set and int returns the integer part of a real number. The variables r_1 and r_2 are calculated as follows:

r_1 = W_i x 5% x 4096 / (|n| x D_zps),    r_2 = W_i x 5% x 4096 / (|n| x (255 - D_zps)). (10)

Equations (9) and (10) can be deduced as follows. Substituting (1) into (6) leads to

r <= W_i x 5% x 4096 / (|n| x |D(u_ref, v_ref) - D_zps|). (11)

For specific display and viewing conditions, the bound on r is determined only by |D(u_ref, v_ref) - D_zps|, which varies within (0, max(D_zps, 255 - D_zps)]. Hence r_min can be calculated by (9) and (10), and r_min is guaranteed to satisfy (6).
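Equations (9) and (10) amount to taking the tighter of the two depth-extreme bounds; a sketch (names ours, and D_zps is assumed strictly between 0 and 255 to avoid division by zero):

```python
def r_min(w_i: int, n: int, d_zps: int) -> int:
    """r_min via (9)-(10): the largest integer r that keeps every pixel's
    parallax within (6), whatever depth value in [0, 255] the pixel has."""
    bound = w_i * 0.05 * 4096.0 / abs(n)
    r1 = bound / d_zps           # depth extreme D(u_ref, v_ref) = 0
    r2 = bound / (255 - d_zps)   # depth extreme D(u_ref, v_ref) = 255
    return int(min(r1, r2))
```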

Here we briefly summarize why visual fatigue can be reduced by adjusting just three parameters. After the proposed depth adjustment is applied, the horizontal sensor parallax D_c is confined to a modest range by the three parameters: virtual view number n, scale factor r and ZPS depth value D_zps. Consequently, the perceived depths of objects in viewer space fall within a comfort zone [15], the accommodation-vergence conflict is reduced, and visual fatigue is therefore reduced in the same stereoscopic viewing environment [7][15].

In addition, the three parameters affect the horizontal sensor parallax (and hence visual fatigue) in different ways. From (1), each parameter has its own linear relationship with the horizontal sensor parallax when the other parameters are held constant. Fig. 5 illustrates these relationships: the trendlines of the maximum and minimum horizontal sensor parallax reached when each of the three parameters is adjusted are plotted using the image data of frame 0 of the "Breakdance" sequence [16].

In Fig. 5-(a), (b) and (c), blue and red lines represent the maximum and minimum parallaxes that are comfortable for viewing, while green and violet lines represent the maximum and minimum parallaxes of the stereoscopic image reached when one of the three parameters is adjusted. The horizontal axes show the view number n, the scale factor r and the ZPS depth value D_zps respectively, while the vertical axes show the parallax measured in pixels. To show the changes in maximum and minimum parallax caused by each parameter separately, n is varied gradually while r and D_zps are kept unchanged (r = 300, D_zps = 130) in Fig. 5-(a); in Fig. 5-(b), r varies while n = 1 and D_zps = 130; in Fig. 5-(c), D_zps is increased gradually while r and n are constant (r = 300, n = 1).

From Fig. 5-(a), one can see that the absolute values of the maximum and minimum parallax increase as n increases, and a similar relationship with r can be seen in Fig. 5-(b). In Fig. 5-(c), the maximum parallax increases while the absolute value of the minimum parallax decreases as D_zps increases. In general, each parameter has its own linear relationship with the horizontal sensor parallax.

[FIGURE 5 OMITTED]

4. Experiment

In this section, several depth-image-based 3D sequences [16][17] are used to test the proposed depth adjustment method for visual fatigue reduction. A 17-inch time-sharing stereoscopic display was used for the subjective assessment, and ten non-expert viewers aged 20 to 35 participated in the experiment. Twenty stereoscopic images, generated with different values of the parameters n, r and D_zps (as shown in Table 2) and consisting of outdoor scenes, indoor scenes, figures and geographic landscapes, were used. The viewers watched the randomly ordered stereoscopic images at a distance of 1 m from the display, and each image was shown for 10 seconds.

In our experiment, a five-level scale with ratings of no fatigue, slight fatigue, moderate fatigue, fatigue and severe fatigue was used for the subjective assessment [4]. Note that "visual fatigue" in this paper means "visual discomfort", so this five-level scale is similar to the five-point category rating scale of [18]. For each presentation, the viewer was asked to place the degree of fatigue into one of the five categories. Fig. 7 shows the subjective assessment results for the stereoscopic images generated with the parameter settings listed in Table 2. The horizontal axis shows the image number and the vertical axis shows the average visual fatigue score of each image over the ten viewers. Note that excessive horizontal parallax was deliberately introduced into some of these images when they were generated. The maximum and minimum parallaxes (D_c_max and D_c_min) of each stereoscopic image can be found in Fig. 8.

From Fig. 7, it can be seen that the average visual fatigue scores of images 4, 11 and 13 are lower than grade 2.5, and from Fig. 8 we find that the minimum parallaxes of images 4 and 13 fall below -W_i x 5%, which indicates that excessive horizontal parallax is liable to cause visual fatigue. The score of image 11 is lower than grade 2.5 even though its parallaxes do not exceed W_i x 5% in magnitude; the reason is that its depth image is so inaccurate that obvious geometric distortions appear in the virtual image [2][8]. It is these distortions that make viewers uncomfortable [7][15]. Another example can be found in Fig. 6.

[FIGURE 6 OMITTED]

A special case in Fig. 8 is image 1: its maximum parallax exceeds W_i x 5%, yet its score is quite high (grade 3.2). This is mainly because most parallaxes of points in image 1 are positive, and positive parallax (objects behind the screen) is in general easier to look at and minimizes eye strain [15].

To reduce visual fatigue, the proposed depth adjustment method was then applied to the test images. Fig. 9-(a) shows the assessment results of images 1, 4 and 13 before and after adjustment, scored by the same ten viewers. The new maximum and minimum parallaxes are shown in Fig. 9-(b), and Fig. 10 shows stereoscopic images 1, 4 and 13 before and after depth adjustment. In Fig. 9-(a), the scores of images 1 and 4 are considerably improved (from 3.2 to 3.5 and from 2.3 to 2.8, respectively), which shows that the proposed method is effective at reducing visual fatigue in most cases where excessive horizontal parallax occurs.

Note that the improvement in the score of image 13 is slight (from 2.1 to 2.2). This is mainly due to the obvious geometric distortions in the images shown in Fig. 10 (circled in red). As mentioned above, an inaccurate depth image (especially a smoothed depth image, or one obtained from a single image by 2D-to-3D conversion) may introduce geometric distortions into the rendered image, and these distortions result in severe visual fatigue when they are prominent. In Fig. 10, depth image (i) is so inaccurate compared with (g) and (h) that it leads to obvious distortions in the rendered image (l). In such cases, the improvement in visual comfort achievable by depth adjustment is limited.

From the experiment we conclude that visual fatigue can be efficiently reduced by adjusting only the three parameters n, r and D_zps obtained from the expression for D_c, provided that the geometric distortions in the images are imperceptible.

[FIGURE 7 OMITTED]

[FIGURE 8 OMITTED]

[FIGURE 9 OMITTED]

[FIGURE 10 OMITTED]

5. Conclusions

A well-known problem in 3D TV is visual fatigue. To reduce visual fatigue in a DIBR-based 3D TV system, a new perceived depth adjustment method is proposed whose main principle is to remove excessive horizontal parallax during 3D image warping. In our method, the perceived depth is expressed as a function of three adjustable parameters: the virtual view number, the scale factor and the ZPS depth value. Adjusting these three parameters effectively changes the perceived depth of the generated stereo pairs; moreover, the scale factor and ZPS depth value alone suffice when a time-sharing stereoscopic display is used. Another advantage of the method is that no camera calibration parameters are needed, either for virtual view generation or for depth adjustment, which makes it suitable for the current depth-image-based 3D video format used in 3D TV [19]. The proposed method is also better suited to hardware implementation than previous methods, since the depth adjustment is performed within simple 3D image warping equations.

Experiments were also performed to verify the method. The results show that the proposed depth adjustment method can generate comfortable stereoscopic images with the different perceived depths that viewers desire. However, the performance of the method may be affected by artifacts in the depth image or the virtual view. In future work, we will consider this factor together with depth adjustment for visual fatigue reduction.

This work was supported by the Fundamental Research Funds for the Central Universities under Grant No. CDJZR10 18 00 13. The authors would like to thank the staff of Homwee Technology Co., Ltd, Changhong Group. They provided laboratory apparatus for the experiments, and helped to design the experiments. The authors also thank the volunteers who participated in the experiments.

http://dx.doi.org/ 10.3837/tiis.2012.04.013

References

[1] X. Yang, J. Liu, J. Sun, X. Li, W. Liu and Y. Gao, "DIBR based view synthesis for free-viewpoint television," in Proc. of 5th 3DTV Conference, pp.1-4, May.2011. Article (CrossRef Link)

[2] P.-J. Lee and Effendi, "Nongeometric distortion smoothing approach for depth map preprocessing," IEEE Transactions on Multimedia, vol.13, no.2, pp.246-254, Apr.2011. Article (CrossRef Link)

[3] A. Smolic, K. Mueller, P. Merkle and A. Vetro, "Development of a new MPEG standard for advanced 3D video applications," in Proc. of 6th International Symposium on Image and Signal Processing and Analysis, pp.400-407, Sep.2009. Article (CrossRef Link)

[4] D. Kim and K. Sohn, "Visual Fatigue Prediction for stereoscopic image," IEEE Transactions on Circuits and Systems for Video Technology, vol.21, no.2, pp.231-236, Feb.2011. Article (CrossRef Link)

[5] R. Liu, Q. Zhu, X. Xu, L. Zhi, H. Xie, J. Yang and X. Zhang, "Stereo effect of image converted from planar," INFORMATION SCIENCES, vol.178, no.8, pp.2079-2090, Apr.2008. Article (CrossRef Link)

[6] D. Kim and K. Sohn, "Depth adjustment for stereoscopic image using visual fatigue prediction and depth-based view synthesis," in Proc. of 2010 IEEE International Conference on Multimedia and Expo, pp.956-961, Jul.2010. Article (CrossRef Link)

[7] T. Bando, A. Iijima and S. Yano, "Visual fatigue caused by stereoscopic images and the search for the requirement to prevent them: A review," Displays, Sep.2011. Article (CrossRef Link)

[8] L. Zhang and W.J. Tam, "Stereoscopic Image Generation Based on Depth Images for 3D TV," IEEE TRANSACTIONS ON BROADCASTING, vol.51, no.2, pp.191-199, Jun.2005. Article (CrossRef Link)

[9] M. Kim, "Post-processing of multiview images: Depth scaling," in Proc. of 6th International Conference on Information Technology, pp.1275-1279, Apr.2009. Article (CrossRef Link)

[10] J. Choi, D. Kim, B. Ham, S. Choi and K. Sohn, "Visual fatigue evaluation and enhancement for 2D-plus-depth video," in Proc. of 2010 17th IEEE International Conference on Image Processing, pp.2981-2984, Sep.2010. Article (CrossRef Link)

[11] R. Liu, H. Xie, G. Tai, Y. Tan, R. Guo, W. Luo, X. Xu and J. Liu, "Depth adjustment for depth-image-based rendering in 3D TV system," Journal of Information and Computational Science, vol.8, no.16, pp.4233-4240, Dec.2011. Article (CrossRef Link)

[12] T.-C. Lin, H.-C. Huang and Y.-M. Huang, "Preserving depth resolution of synthesized images using parallax-map-based dibr for 3D-TV," IEEE Transactions on Consumer Electronics, vol.56, no.2, pp.720-727, Jul.2010. Article (CrossRef Link)

[13] J. J. Hwang and H. R. Wu, "Stereo image quality assessment using visual attention and distortion predictors," KSII Transactions on Internet and Information Systems, vol.5, no.9, pp.1613-1631, 2011. Article (CrossRef Link)

[14] M. Kim, Y. Cho, H. G. Choo, J. Kim and K. S. Park, "Effects of depth map quantization for computer-generated multiview images using depth image-based rendering," KSII Transactions on Internet and Information Systems, vol.5, no.11, pp.2175-2190, 2011. Article (CrossRef Link)

[15] W. J. Tam, F. Speranza, S. Yano, K. Shimono and H. Ono, "Stereoscopic 3D-TV: Visual comfort," IEEE TRANSACTIONS ON BROADCASTING, vol.57, no.2, pp.335-346, Jun.2011. Article (CrossRef Link)

[16] C. L. Zitnick, S. B. Kang, M. Uyttendaele, S. Winder and R. Szeliski, "High-quality video view interpolation using a layered representation," ACM SIGGRAPH and ACM Trans. on Graphics, vol.23, no.3, pp.600-608, Aug.2004. Article (CrossRef Link)

[17] G. Zhang, J. Jia, T.-T. Wong and H. Bao, "Consistent depth maps recovery from a video sequence," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.31, no.6, pp.974-988, Jun.2009. Article (CrossRef Link)

[18] F.L. Kooi and A. Toet, "Visual comfort of binocular and 3D displays," Displays, vol.25, no.2-3, pp.99-108, Sep.2004. Article (CrossRef Link)

[19] HDMI Licensing LLC, "High-Definition Multimedia Interface Specification Version 1.4," 2010.

Ran Liu (1,2,3), Yingchun Tan (1), Fengchun Tian (1), Hui Xie (1), Guoqin Tai (1), Weimin Tan (1), Junling Liu (1), Xiaoyan Xu (2), Chaibou Kadri (1) and Naana Abakah (1)

(1) College of Communication Engineering, Chongqing University, Chongqing 400044, China

(2) College of Computer Science, Chongqing University, Chongqing 400044, China

(3) Homwee Technology Co., Ltd, Changhong Group, Chengdu 610031, China

[e-mail: {ran.liu_cqu, hui.xie_cqu, junling.liu_cqu}@qq.com, {tanyingchunaaa, weimintan_cqu, lwy0713}@126.com, fengchuntian@cqu.edu.cn, taiguoqin@163.com, H_nare@hotmail.com, nabakah@gmail.com]

* Corresponding author: Ran Liu

Received February 3, 2012; revised April 5, 2012; accepted April 9, 2012; published April 25, 2012

Ran Liu received the B.E., M.E., and D.E. degrees in Computer Science from Chongqing University, Chongqing, China, in 2001, 2004, and 2007, respectively. He worked as a postdoctoral researcher at Homwee Technology Co., Ltd, Chengdu, China from 2008 to 2010, and is now a senior engineer there. He is also a teacher in the College of Communication Engineering and the College of Computer Science, Chongqing University, China. His research interests include 3D TV, virtual reality and computer vision.

Yingchun Tan is currently pursuing her Master's degree in Communication Engineering at Chongqing University, Chongqing, China. She received her Bachelor's degree in Communication Engineering from Chongqing University, Chongqing, China, in 2008. Her research interests include 3D TV and digital signal processing.

Fengchun Tian received the B.E., M.E., and D.E. degrees in radio engineering, biomedical instruments and engineering, theoretical electric engineering from Chongqing University, Chongqing, P.R. China, in 1984, 1986, and 1996, respectively. Since 1984, he has been working in Chongqing University as a teacher. Since 2007, he is also an adjunct professor in the University of Guelph, Canada. His current research interests are image processing (including optical image processing and video), biomedical and bioinformatics, modern signal processing technology.

Hui Xie received her Bachelor's degree at the College of Physics, Hunan University of Science and Technology, Hunan, China, in 2010. Now, she is pursuing her Master's degree at the College of Communication Engineering, Chongqing University, Chongqing, China. Her research interests are 3D TV and image processing.

Guoqin Tai is currently pursuing his Master's degree at the College of Communication Engineering, Chongqing University, Chongqing, China. He received the Bachelor's degree at the College of Information Engineering, Qingdao Agriculture University, Shandong, China, in 2009. His research interests include 3D TV, virtual reality and computer vision.

Weimin Tan is currently pursuing his Master's degree at the College of Communication Engineering, Chongqing University, Chongqing, China. He received his Bachelor's degree from the College of Physics and Electronics, Hunan Institute of Science and Technology, Hunan, China, in 2008. His research interests include 3D TV and image processing.

Junling Liu is currently pursuing his Bachelor's degree at the College of Communication Engineering, Chongqing University, Chongqing, China. His research interests include 3D TV, digital signal processing.

Xiaoyan Xu received the Master's degree and Ph.D. degree in Computer Science from Chongqing University, Chongqing, China, in 2007 and 2011, respectively. Her current research interests are 3D TV, image processing, and video post-processing.

Chaibou Kadri received his Bachelor's degree in Electrical/Electronic Engineering in 2001 from the Federal University of Technology Bauchi, Nigeria, and his MS degree in Communication and Information Systems in 2009 from Chongqing University, China. He is presently with Chongqing University, pursuing his Ph.D. degree in circuits and systems. His research interests include signal processing, intelligent systems, and soft computing (machine learning).

Naana Abakah received the Master's degree in Electronics and Communication Engineering from Chongqing University, Chongqing, China, in 2011. She received her Bachelor's degree in Computer Science (Second Class Upper Division) from Kwame Nkrumah University, Kumasi, Ghana, in 2008. Her current research interests are image processing and video streaming applications.
Table 1. Suggested parameter values that may cause less visual
fatigue for the viewer

Parameter                        Parameter Range     Suggested value

virtual view number n            [-4, 4]             1
scale factor r                   [0, W_i x 80%]      W_i x 30%
ZPS depth value D_zps            [0, 255]            130

Table 2. Parameter settings for generating stereoscopic
images with different perceived depths

image     virtual view     scale        ZPS depth        resolution of one
number    number n         factor r     value D_zps      image in a stereo pair

1               4            250           150            576x352
2               1            250           150            576x352
3               2            140            0             960x540
4               5            140            0             960x540
5               2            220           140            960x540
6               5            140           140            960x540
7               3            140           160            960x540
8               4            220           160            960x540
9               2            220            0             960x540
10              3            220           110            960x540
11              3            220           110            576x352
12              1            220           110            576x352
13              4            220           110            576x352
14              3            150           120            576x352
15              2            150           160            576x352
16              1             80           150            960x540
17              4             80           170            960x540
18              3            300           170           1024x768
19              4            300           170           1024x768
20              2            300           170           1024x768
COPYRIGHT 2012 KSII, the Korean Society for Internet Information. Published in KSII Transactions on Internet and Information Systems, April 2012.