
Asymmetric Coding of Stereoscopic 3D Images with Perceptual Quality Control


In recent years, digital multimedia content has become very rich and multimedia systems have evolved rapidly. These systems aim to increase the users' quality of experience (QoE) by offering them a realistic feeling of immersion. The most obvious example is 3D visual media applications, which provide the illusion of depth perception of the scene.

Most 3D content is captured and prepared in stereoscopic format [1]. Stereoscopic 3D (S-3D) consists of capturing two images from two slightly different positions and presenting one to each of the viewer's eyes. Stereoscopic content therefore requires twice the amount of data of monoscopic content. This increase is an important issue that has attracted a significant amount of research effort. The development of efficient compression techniques is of paramount importance for the storage, transmission and consumption of S-3D content.

The simplest approach to stereoscopic image coding is to encode each view of the stereo pair independently, using any still-image compression standard such as JPEG [2] or JPEG2000 [3]. This approach, usually referred to as simulcast coding, offers low complexity, easy deployment and backward compatibility with 2D systems. However, since the redundancy between views (i.e., inter-view redundancy) is not exploited, its coding performance is clearly suboptimal. Indeed, the views of a stereo pair represent the same scene captured from different viewpoints and consequently contain a large amount of inter-view statistical dependency. Stereoscopic image coding can therefore be performed more efficiently by exploiting such inter-view correlations [4], [21]. This is usually achieved by predicting one view from the other through the so-called disparity estimation and compensation mechanism. More specifically, one view of the stereo pair is selected as the reference image, and the other is considered the target. A disparity map is then calculated and used to predict the target image from the reference. Finally, the difference between the predicted image and the original target image yields the so-called residual image, which is transformed, quantized, encoded and transmitted together with the disparity map.
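The prediction-plus-residual pipeline described above can be sketched as follows. This toy disparity-compensation function assumes integer, purely horizontal disparities with clamping at the image border; it is an illustration of the mechanism, not the codec used in the paper:

```python
import numpy as np

def predict_target(reference, disparity):
    """Predict the target view: each target pixel (y, x) is taken from the
    reference at column x - d(y, x) (integer disparity, clamped to the image)."""
    h, w = reference.shape
    xs = np.arange(w)
    predicted = np.empty_like(reference)
    for y in range(h):
        src = np.clip(xs - disparity[y], 0, w - 1)
        predicted[y] = reference[y, src]
    return predicted

# Toy pair: a horizontal luminance ramp seen with a constant 2-pixel disparity.
reference = np.tile(np.arange(16.0), (8, 1))
disparity = np.full((8, 16), 2, dtype=int)
target = np.clip(reference - 2.0, 0.0, None)   # what the target camera "sees"

predicted = predict_target(reference, disparity)
residual = target - predicted   # this residual + the disparity map get encoded
```

On this synthetic pair the prediction is exact, so the residual is all zeros; on real content the residual carries the occlusions and photometric differences that disparity compensation cannot explain.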

Considerable research effort has been dedicated to enhancing the steps mentioned above [5]. Some works tackled the transformation step when coding the reference and residual images [6-10]; for instance, some showed that the residual image has specific characteristics that can be exploited by the transformation step [6,7]. Other contributions proposed applying different types of transformation to occluded and non-occluded blocks [10]. Further works aimed at enhancing disparity estimation and compensation [11-14]. To estimate the disparity between the reference and target images, the authors of [11] proposed a bandelet-based approach using the geometrical properties of both images. In [12], an overlapped block-based matching method was proposed, including smoothness constraints and using adaptive windows with variable shapes.

Since the human observer is the ultimate receiver, it is advantageous to integrate human visual system (HVS) properties to enhance coding performance. Along these lines, several perceptual approaches exploiting HVS properties have been proposed for stereoscopic image coding. Among them are the asymmetric coding techniques, which are based on the so-called binocular suppression theory [15]. The latter states that when the two eyes are presented with views of different quality, the fused 3D percept is dominated by the higher-quality view [16]. Based on this theory, in order to reduce the bitrate required for stereoscopic image delivery, one image of the stereo pair can be encoded at a high quality level while the other is encoded at a slightly lower quality, without noticeable visual quality loss.

Many asymmetric coding methods have been proposed in the literature [15-27]; they differ mainly in the means by which the asymmetry in quality between the two views is achieved. For instance, some methods encode the two views of the stereo pair at unequal quality or bitrates, i.e., with unequal quantization parameters (QP); this is referred to as quality (PSNR) reduction or asymmetric quantization [16-19]. Implementing asymmetry by quality/PSNR reduction is straightforward and keeps the encoder/decoder unchanged: one view of the stereo pair is coded more heavily than the other, so that one view shows more compression artifacts, yet the fused perceptual 3D image is dominated by the higher-quality view. For instance, in order to decrease the overall bitrate, Saygili et al. [18] proposed encoding one view at sufficiently high quality (PSNR = 40 dB), while the quality of the secondary view is reduced but kept above a certain PSNR threshold. They observed that the degradation in 3D perception is unnoticeable and that the perceptual quality of the 3D image is driven by the high-quality view. This can be explained by the fact that the high-quality image masks the effect of the lower-quality one.
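Asymmetric quantization can be illustrated with a toy uniform scalar quantizer standing in for the JPEG quantization stage (the step sizes and image sizes below are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
view_left = rng.uniform(0, 255, size=(64, 64))              # reference view
view_right = view_left + rng.normal(0, 1, size=(64, 64))    # similar content

def quantize(img, step):
    """Uniform scalar quantization: a coarser step means fewer
    reconstruction levels and hence lower quality."""
    return np.round(img / step) * step

fine, coarse = 4, 16                    # unequal steps = the asymmetry
rec_left = quantize(view_left, fine)    # lightly quantized view
rec_right = quantize(view_right, coarse)  # heavily quantized view

mse = lambda a, b: float(np.mean((a - b) ** 2))
levels = lambda img, step: len(np.unique(np.round(img / step)))

# The heavily quantized view carries more error ...
assert mse(view_right, rec_right) > mse(view_left, rec_left)
# ... but needs fewer distinct symbols to represent (a crude rate proxy).
assert levels(view_right, coarse) < levels(view_left, fine)
```

The binocular suppression argument is that the extra error in the coarse view stays invisible in the fused percept as long as the quality gap is bounded.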

With the same aim of reducing transmission bandwidth, other asymmetric coding methods present the two views at different spatial resolutions. This approach is usually known as the mixed-resolution (MR) method. In the MR concept, before the encoding stage, one of the views is low-pass filtered and downsampled (blurred image), while the other view is kept at its original full resolution (sharp image). Because fewer pixels have to be encoded than in the full-resolution case, this operation reduces the transmission bandwidth. At the decoding stage, the lower-resolution view is up-sampled back to full resolution. The perceived 3D fusion of the two images (sharp and blurred) is then relatively close to the sharper image [20-27].
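A minimal numeric sketch of the MR idea, using 2x2 averaging for the encoder-side downsampling and nearest-neighbour interpolation for the decoder-side upsampling (both filter choices are ours, for illustration only):

```python
import numpy as np

def downsample2(img):
    # 2x2 block averaging: half the resolution along both dimensions
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample2(img):
    # nearest-neighbour interpolation back to full resolution at the decoder
    return img.repeat(2, axis=0).repeat(2, axis=1)

full_view = np.arange(64.0).reshape(8, 8)   # the sharp, full-resolution view
low_view = downsample2(full_view)           # only 1/4 of the pixels are coded
restored = upsample2(low_view)              # decoder output for the blurred view

assert low_view.size == full_view.size // 4
assert restored.shape == full_view.shape
```

The bandwidth saving comes directly from the 4x reduction in coded pixels; the perceptual cost is the blur left after upsampling, which binocular suppression is expected to mask.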

Asymmetric stereoscopic image coding has demonstrated its effectiveness in providing significant bitrate savings without noticeable degradation, up to a certain limit. From another perspective, some works have tried to quantify this perceptual limit of asymmetry in the context of stereoscopic image/video coding, and used the obtained limit to perform asymmetric coding such that the viewer cannot perceive any degradation. For instance, in [18], the authors found that the tolerable difference between the two views, in terms of PSNR, is about 9 dB for a parallax barrier display and 7 dB for a full-resolution projection display. In the work described in [27], a difference of 2 dB between the left and right views was considered the maximum tolerance level at which the viewer does not perceive any annoying distortion. Finally, for asymmetric coding by reduction of spatial resolution, downsampling ratios between 1/2 and 3/8 have been shown to provide a satisfactory 3D viewing experience [22].

Despite these valuable contributions, all of the studies described above are based on subjective experiments and depend on the experimental design, which makes their generalization difficult. In addition, assigning a fixed value to the limit of asymmetry does not allow any adaptation to the content.

Consequently, in this paper, we propose a novel method that automatically determines the most suitable quality gap between the left and right images in asymmetric stereoscopic image coding, providing the best 3D viewing experience. To achieve this perceptual distortion control, the relationship between the asymmetric quality and the inter-view distortion between corresponding pixels is expressed as a function relying on the binocular just-noticeable difference (BJND) model [29].

The rest of this paper is organized as follows. Section 2 describes the proposed approach as well as the exploited BJND model. Experimental results are presented in Section 3. Finally, Section 4 concludes the paper and provides directions for future work.


As described previously, when the left and right images are presented to the viewer with different qualities, the perceptual quality in 3D viewing is dominated by the high-quality view, thanks to the binocular suppression phenomenon of the HVS. Based on this concept, we can compress one view of the stereo pair more heavily (with a lower bit budget) than the other without introducing visible artifacts during 3D viewing, provided that the quality difference between the views does not exceed a given threshold.

In the following, we describe the proposed asymmetric stereoscopic image coding scheme, which automatically and adaptively adjusts the quality threshold between the two views according to a perceptual model, making the quality difference almost transparent to the viewer and thus offering an optimal 3D visual experience.

The framework of the proposed method is shown in Fig. 1. Consider a stereo pair composed of the left image $I_l$ and the right image $I_r$. In the rest of this paper, we take $I_l$ as the reference image and $I_r$ as the target. In order to achieve bitrate savings, $I_r$ is the view that we compress more strongly and that thus has the lower quality.

To adjust the asymmetric quality of the stereo pair, i.e., to tune the quality of $I_r$ ($Q_r$) relative to that of $I_l$ ($Q_l$), it is important to model the relationship between the quality ratio of the two views ($Q$) and their inter-view distortion (InterD). It is assumed that the InterD-Q relationship can be described by a function as follows:

InterD = f(Q) (1)

where $Q$ is the ratio between the qualities of the two views ($Q_r/Q_l$), ranging in $[0,1]$, and InterD is the mean-square-error distortion between corresponding pixels of the two views, defined as

$\mathrm{InterD} = \frac{1}{n}\sum_{i=1}^{n}\left[I_l(x_i, y_i) - I_r(x_i - d_i, y_i)\right]^2$ (2)

where $n$ is the number of matched pixels between $I_l$ and $I_r$, $I_l(x_i, y_i)$ denotes the luminance value of pixel $i$ at position $(x_i, y_i)$, and $d_i$ is the disparity value of pixel $i$.
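A direct reading of Eq. (2) can be sketched as below; matches falling outside the image are simply discarded, which is an assumption on our part:

```python
import numpy as np

def inter_view_distortion(left, right, disparity):
    """Mean-square error between corresponding pixels: left pixel (x, y)
    is matched with right pixel (x - d, y); out-of-image matches are skipped."""
    h, w = left.shape
    total, n = 0.0, 0
    for y in range(h):
        for x in range(w):
            xr = x - disparity[y, x]
            if 0 <= xr < w:
                total += (left[y, x] - right[y, xr]) ** 2
                n += 1
    return total / n

# Toy pair: a ramp with constant disparity 1; corresponding pixels of the
# right view are offset by 0.5 (standing in for coding error).
left = np.tile(np.arange(8.0), (4, 1))
disp = np.ones((4, 8), dtype=int)
right = left + 1.0 + 0.5
interd = inter_view_distortion(left, right, disp)   # each match differs by 0.5
```

On this toy pair every valid match differs by exactly 0.5, so InterD evaluates to 0.25.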

Next, in order to define the function $f(\cdot)$, the InterD-Q relationship at different $Q_r/Q_l$ values is evaluated experimentally using a set of different stereoscopic images [30]. For each stereo pair, we used ten different $Q_l$ values ranging from 10 to 100 with a step of 10, and varied the $Q_r$ values on the basis of $Q_l$ as follows

[mathematical expression not reproducible]. (3)

Fig. 2 shows the distributions of InterD versus Q at different $Q_l$ and $Q_r$ values for the Aloe stereo pair. Based on the obtained results, we observe that the general shape of the distribution follows an exponential curve. Consequently, the function describing the InterD-Q relationship in (1) can be formulated as

$\mathrm{InterD} = a_1 \cdot e^{-a_2 Q}$ (4)

where $a_1$ and $a_2$ are model parameters determined experimentally. As mentioned previously, the aim is to decrease the quality of one view without introducing visible artifacts in 3D viewing. In other words, decreasing $Q_r$ should lead to a change below the binocular visibility threshold (BJND) [29]. In the following, we give a brief overview of the BJND model.
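For illustration, the parameters of such an exponential model can be recovered from measured (Q, InterD) samples by a log-domain least-squares fit. The decaying form InterD = a1*exp(-a2*Q) below is our reading of the "exponential curve" observation, not a formula reproduced verbatim from the paper:

```python
import numpy as np

# Synthetic (Q, InterD) samples following the assumed exponential shape.
a1_true, a2_true = 25.0, 0.35
q = np.linspace(0.1, 1.0, 10)
interd = a1_true * np.exp(-a2_true * q)

# Log-domain linear least squares: log InterD = log a1 - a2 * Q
A = np.vstack([np.ones_like(q), -q]).T
coef, *_ = np.linalg.lstsq(A, np.log(interd), rcond=None)
a1_fit, a2_fit = float(np.exp(coef[0])), float(coef[1])
```

On noise-free synthetic data the fit recovers the generating parameters; on measured samples it gives the least-squares estimate of $a_1$ and $a_2$ in the log domain.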

The just-noticeable difference (JND) can be defined as the minimum change that can be noticed by a standard viewer. In other words, the JND is the visibility threshold below which no change can be detected by the HVS [28]. In order to build a binocular JND (BJND) model, Zhao et al. conducted psychophysical experiments [29]. Based on these experiments, they proposed a BJND model that measures the joint visibility of a pair of distortions in the two views. The BJND model determines the minimum distortion in one view that evokes binocularly visible differences, given the background information and the distortion in the corresponding area of the other view.

In their model, they considered two HVS characteristics, namely the luminance and contrast masking effects, and extended them to the case of binocular vision as described in the following.

Given the left and right views, as well as the disparity map of the left image, the BJND of the left view ($\mathrm{BJND}_l$) is defined as:

$\mathrm{BJND}_l(i,j) = A_C\big(bg_r(i-d_l,j),\, eh_r(i-d_l,j)\big) \cdot \left[1 - \left(\dfrac{n_r(i-d_l,j)}{A_C\big(bg_r(i-d_l,j),\, eh_r(i-d_l,j)\big)}\right)^{\lambda}\right]^{1/\lambda}$ (5)

where $i$ and $j$ are the pixel coordinates and $d_l$ is the horizontal disparity value at pixel $(i,j)$. The parameter $\lambda$ controls the influence of the noise in the right image; the value $\lambda = 1.25$ was suggested in [29]. Note that $\mathrm{BJND}_l$ depends on the background luminance $bg_r$, the edge height $eh_r$ and the noise amplitude $n_r$ of the corresponding pixel in the right image. When there is no noise in the right image, i.e., $n_r(i-d_l, j) = 0$, $\mathrm{BJND}_l$ reduces to $A_C$, which is defined as

$A_C(bg, eh) = A_{limit}(bg) + K(bg) \cdot eh$ (6)

Based on psychophysical experiments, the authors defined $A_{limit}(bg)$ and $K(bg)$, respectively, as

$A_{limit}(bg) = \begin{cases} 0.0027 \cdot (bg^2 - 96 \cdot bg) + 8, & 0 \le bg < 48 \\ 0.0001 \cdot (bg^2 - 32 \cdot bg) + 1.7, & 48 \le bg \le 255 \end{cases}$ (7)

$K(bg) = -10^{-6} \cdot (0.7 \cdot bg^2 + 32 \cdot bg) + 0.07$ (8)

where $bg$ is the mean luminance value of a 5x5 block centered on the corresponding pixel position, and the edge height $eh$ is computed with the 5x5 Sobel operators as follows:

$eh(i,j) = \sqrt{\mathrm{grad}_h^2(i,j) + \mathrm{grad}_v^2(i,j)}$ (9)

$\mathrm{grad}_k(i,j) = \frac{1}{16} \sum_{h=1}^{5} \sum_{v=1}^{5} p(i-3+h,\, j-3+v) \cdot G_k(h,v), \quad k \in \{h, v\}$ (10)

where the detailed representation of the Sobel operators $G_k(h,v)$ is as follows

[mathematical expression not reproducible] (11)
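The masking functions above can be sketched in code as follows. The piecewise coefficients of A_limit and the combination rule of Eq. (5) follow our reading of the BJND model in [29], so treat them as assumptions rather than a verified reimplementation:

```python
import numpy as np

def A_limit(bg):
    """Luminance-masking term (piecewise form reported for the BJND model
    in [29]; the exact coefficients are our assumption)."""
    bg = np.asarray(bg, dtype=float)
    low = 0.0027 * (bg * bg - 96.0 * bg) + 8.0    # for 0 <= bg < 48
    high = 0.0001 * (bg * bg - 32.0 * bg) + 1.7   # for 48 <= bg <= 255
    return np.where(bg < 48, low, high)

def K(bg):
    """Contrast-masking slope, Eq. (8)."""
    return -1e-6 * (0.7 * bg * bg + 32.0 * bg) + 0.07

def A_C(bg, eh):
    """Visibility threshold in the noise-free case, Eq. (6)."""
    return A_limit(bg) + K(bg) * eh

def bjnd_left(bg_r, eh_r, n_r, lam=1.25):
    """BJND of the left view given background, edge height and noise of the
    corresponding right-view pixel (our reading of Eq. (5))."""
    ac = A_C(bg_r, eh_r)
    ratio = np.clip(n_r / ac, 0.0, 1.0)
    return ac * (1.0 - ratio ** lam) ** (1.0 / lam)

# With no noise in the right view, the BJND reduces to A_C, as stated above.
assert np.allclose(bjnd_left(bg_r=128.0, eh_r=10.0, n_r=0.0), A_C(128.0, 10.0))
```

Note that the two branches of A_limit meet continuously near bg = 48 (both evaluate to about 1.78), which is a useful sanity check on the piecewise coefficients.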

To the best of our knowledge, this is the first model to measure the perceptible distortion threshold of stereoscopic images under binocular vision, and we exploit it in the proposed method to ensure a reliable 3D QoE.

Perceptual distortion control is thus achieved in the proposed method by restricting the inter-view distortion InterD to stay below the BJND:

InterD < BJND (12)

Therefore, substituting InterD in (12) by its definition in (4), we can automatically derive the appropriate value of $Q_r$ as follows

$Q_r \ge \dfrac{Q_l}{a_2} \cdot \ln\!\left(\dfrac{a_1}{\mathrm{BJND}}\right)$ (13)

In this way, to avoid any visible distortion when using asymmetric coding, the selected value of $Q_r$ should not fall below the threshold defined in (13). In other words, from the value of $Q_l$ and the BJND, which is computed from the content of both views, we can automatically derive the optimal $Q_r$ value that achieves significant bitrate savings while providing an optimal quality of experience.
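Under the assumed decaying-exponential model of the InterD-Q relationship, the lower bound on $Q_r$ can be computed as below. The closed form is our own inversion of InterD < BJND under that assumption, and the parameter values follow Section 3; treat the whole sketch as illustrative:

```python
import math

def min_qr(ql, bjnd, a1=25.0, a2=0.35):
    """Lower bound on the target-view quality Q_r so that the predicted
    inter-view distortion stays below BJND. Assumes the model
    InterD = a1 * exp(-a2 * Q) with Q = Q_r / Q_l (our reading of
    Eqs. (4) and (13)); a1 = 25, a2 = 0.35 follow the paper's tuning."""
    q = math.log(a1 / bjnd) / a2          # quality ratio where InterD == BJND
    return max(0.0, min(ql, q * ql))      # clamp to the valid [0, Q_l] range

# The larger the visibility threshold, the lower Q_r may safely go.
qr_20 = min_qr(ql=80, bjnd=20)
qr_18 = min_qr(ql=80, bjnd=18)
assert qr_20 < 80
assert qr_20 < qr_18
```

For example, with $Q_l = 80$ and BJND = 20 the bound permits reducing the target view to roughly 64% of the reference quality, while a smaller (stricter) BJND pushes the bound upward.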


In this section, simulation results are provided to evaluate the performance of the proposed asymmetric stereoscopic image coding method. Evaluations were carried out on the Middlebury stereo dataset [30]. We used eight stereo image pairs with resolutions ranging from 450x375 to 1390x1110, named Art, Books, Cloth3, Cones, Midd1, Midd2, Moebius and Teddy. Fig. 3 shows the left image of each stereo pair considered in the performance evaluation.

The proposed method depends on the parameters $a_1$ and $a_2$ (see (13)). These were determined through extensive experimentation; the best results were obtained by setting $a_1 = 25$ and $a_2 = 0.35$, given that the BJND values do not exceed 20.

The proposed method was compared with the symmetric stereoscopic image coding method, considered as the reference or anchor (denoted SC). In the latter, both views are coded in the same way, without favoring one view over the other. In addition, three widely used asymmetric stereoscopic image coding methods were also considered in the comparison. The first achieves the asymmetry by downsampling one view by a ratio of 1/2 along both coordinate axes (denoted AC_Down); the second blurs one view of the stereo pair with a Gaussian filter (denoted AC_Blur); and in the third, the two views are coded at unequal quality, with a fixed quality gap of 10 between them (denoted AC_FQuality).
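The three asymmetric baselines can be mimicked on a toy image as follows (the Gaussian kernel and the quality values are illustrative choices of ours, not the exact settings of the original experiments):

```python
import numpy as np

view = np.random.default_rng(1).uniform(0, 255, (32, 32))  # target view stand-in

# AC_Down: halve the resolution of one view along both axes (2x2 averaging)
down = view.reshape(16, 2, 16, 2).mean(axis=(1, 3))

# AC_Blur: low-pass one view with a small separable Gaussian-like kernel
k = np.array([0.25, 0.5, 0.25])
blur = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, view)
blur = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blur)

# AC_FQuality: code one view with a quality index lowered by a fixed gap of 10
quality_left, gap = 80, 10           # illustrative JPEG-style quality values
quality_right = quality_left - gap

assert down.size == view.size // 4   # 1/2 downsampling ratio on both axes
assert blur.var() < view.var()       # low-pass removes high-frequency energy
assert quality_right == 70
```

Each baseline degrades only the target view; the reference view is coded untouched, matching the asymmetric setups compared in this section.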

For all compared methods, we used JPEG [2] as the coding standard, with ten different quality values ranging from 10 to 100. As mentioned previously, the left image is coded as the reference and the right one as the target. The results are presented as a quantitative comparison in terms of average bitrate and the corresponding PSNR. The latter is computed from the average of the mean squared errors (MSE) of the left and right reconstructed images as follows:

$\mathrm{PSNR} = 10 \cdot \log_{10}\!\left(\dfrac{255^2}{(\mathrm{MSE}_l + \mathrm{MSE}_r)/2}\right)$ (14)
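A sketch of this averaged-MSE PSNR (our reading of Eq. (14), assuming 8-bit images with peak value 255):

```python
import numpy as np

def stereo_psnr(left, rec_left, right, rec_right, peak=255.0):
    """Single PSNR value for a reconstructed stereo pair, computed from
    the mean of the two views' MSEs."""
    mse_l = np.mean((left - rec_left) ** 2)
    mse_r = np.mean((right - rec_right) ** 2)
    return 10.0 * np.log10(peak ** 2 / ((mse_l + mse_r) / 2.0))

# Both views reconstructed with a constant error of 5 -> MSE of 25 each.
l = np.full((4, 4), 100.0)
r = np.full((4, 4), 120.0)
psnr = stereo_psnr(l, l + 5.0, r, r + 5.0)
```

Averaging the MSEs before the logarithm (rather than averaging two PSNR values) penalizes the pair by its worse view, which matters precisely in the asymmetric setting.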

The coding performance is illustrated through the rate-distortion (R-D) curves for the different images, shown in Fig. 4. The vertical axis represents the average PSNR and the horizontal axis the bitrate required to encode the stereoscopic images. In addition, the average bitrate savings, calculated with the widely used Bjontegaard measurement [31], are reported in Table 1.

As shown in Fig. 4, the SC method requires the largest bitrate and thus provides the poorest rate-distortion trade-off, since the stereo pair is coded without any asymmetry. In contrast, the proposed method achieves similar quality while decreasing the required bitrate. The performance of the other asymmetric coding methods depends on the image content and varies from one stereo pair to another. Overall, the AC_Down method performs better than the two remaining asymmetric coding methods (AC_Blur and AC_FQuality), but the proposed method is clearly more efficient and outperforms AC_Down. As for AC_Blur, despite its bitrate savings, it provides the worst performance, obtaining the lowest PSNR values.

The proposed method provides significant bitrate savings without reducing the visual quality of the reconstructed stereo images. This is achieved through adaptation to the content of the stereo pair and through consideration of the quality of the other view, unlike the AC_FQuality method, whose quality gap between views is fixed beforehand and therefore allows no adaptation.

Bitrate savings and PSNR differences are given in Table 1. The bitrate saving achieved by the proposed method ranges from about 8.14% to 16.32%, with an average of 13.39% compared with the SC method. AC_Down provides bitrate savings ranging from 5.64% to 15.31%, thus performing below the proposed method. The weakest performance is obtained by the AC_Blur and AC_FQuality methods, with average bitrate savings of 4.85% and 5.92%, respectively. In addition, the PSNR values obtained by the proposed method are much higher than those of the other methods. Moreover, for the AC_Blur and AC_FQuality methods, in some instances such as the Cloth3 and Teddy stereo pairs, a PSNR degradation can even be noted. These results clearly confirm the previous conclusions that the proposed method provides the best results.

According to the results in Fig. 4 and Table 1, thanks to the automatic and efficient adjustment of the quality gap between views, the proposed method provides significant bitrate savings compared to the conventional coding approach without introducing noticeable distortion in 3D viewing.


In this paper, in order to enhance asymmetric stereoscopic image coding, we proposed a novel model that adjusts the quality gap between the images of the stereo pair automatically and adaptively. We derived a quality threshold by modeling the relationship between the quality of the two views and their inter-view distortion, and by including an HVS-inspired model. Based on this automatic quality thresholding, the stereo pair is asymmetrically coded, reducing the required bitrate without causing visible artifacts in 3D viewing.

Experimental results show the effectiveness of the proposed coding approach compared to conventional asymmetric coding methods based on a fixed asymmetry threshold.

In future work, the proposed method can be extended to stereoscopic/multi-view videos, which raises new challenges such as maintaining consistent quality across successive frames.


[1] A. Vetro, A. M. Tourapis, K. Muller, and T. Chen, "3D-TV Content Storage and Transmission," IEEE Trans. Broadcast, vol. 57, no. 2, pp. 384-394, Jan. 2011.

[2] G. K. Wallace, "The JPEG still picture compression standard," IEEE Transactions on Consumer Electronics, vol. 38, no. 1, pp. xviii-xxxiv, Feb. 1992.

[3] T. Acharya and P.-S. Tsai, JPEG2000 Standard for Image Compression, John Wiley & Sons, New Jersey, 2005.

[4] M. E. Lukacs, "Predictive coding of multi-viewpoint image sets," in Proc. IEEE Int. Conf. Acoustics, Speech and Signal Processing (ICASSP 86), Tokyo, Japan, Apr. 1986, pp. 521-524.

[5] S. Ouddane, SA. Fezza, and K-M. Faraoun, "Stereo Image Coding: State of the Art," in Proc. IEEE International Workshop on Systems Signal Processing and their Applications (WoSSPA 2013), Tipaza, Algeria, May. 2013. pp. 122-126.

[6] M. S. Moellenhoff and M. W. Maier, "Transform coding of stereo image residuals," IEEE Trans. on Image Processing, vol. 7, no. 6, pp. 804-812, Jun. 1998.

[7] W. Hachicha, A. Beghdadi, and F. A Cheikh, "1D directional DCT-based stereo residual compression," in Proc. European signal Processing Conference (EUSIPCO 2013), Marrakech, Morocco, Sep. 2013.

[8] N. V. Boulgouris and M. G. Strintzis, "A family of wavelet-based stereo image coders," IEEE Transactions on Circuits and Systems for Video Technology, vol. 12, no. 10, pp. 898-903, 2002.

[9] J. N. Ellinas, M. S. Sangriotis, "Stereo image compression using wavelet coefficients morphology," Image and Vision Computing, vol. 22, no.4, pp. 281-290, 2004.

[10] T. Frajka and K. Zeger, "Residual image coding for stereo image compression," Optical Engineering, vol. 42, no. 1, pp. 182-189, Jan. 2003.

[11] A. Maalouf and M.-C. Larabi, "Bandelet-Based Stereo Image Coding," in Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2010), Dallas, Texas, USA, Mar. 2010, pp. 698-701.

[12] W. Woo and A. Ortega, "Overlapped block disparity compensation with adaptive windows for stereo image coding," IEEE Transactions on Circuits and Systems for Video Technology, vol. 10, no. 2, pp. 194-200, 2000.

[13] R. Bensalma and M.-C. Larabi, "Optimizing the disparity map by the integration of HVS binocular properties for efficient coding of stereoscopic images," in Proc. IEEE 3DTV-Conference (3DTV-CON 2010), Tampere, Finland, Jun. 2010.

[14] J. N. Ellinas and M. S. Sangriotis, "Stereo Image Coder Based on the MRF Model for Disparity Compensation," EURASIP Journal on Applied Signal Processing, Article ID 73950, pp. 1-13, 2006.

[15] L. B. Stelmach, W. J. Tam, D. Meegan, and A. Vincent, "Stereo image quality: effects of mixed spatio-temporal resolution," IEEE Trans. Circuits Syst. Video Technol., vol. 10, no. 2, pp. 188-193, Mar. 2000.

[16] P. Aflaki, M. M. Hannuksela, and M. Gabbouj, "Subjective quality assessment of asymmetric stereoscopic 3-D video," Journal of Signal, Image and Video Processing, vol. 9, no 2, pp. 331-345, Feb. 2013.

[17] P. Seuntiens, L. Meesters, and W. IJsselsteijn, "Perceived quality of compressed stereoscopic images: Effects of symmetric and asymmetric JPEG coding and camera separation," ACM Trans. Appl. Perception, vol. 3, no. 2, pp. 95-109, Apr. 2006.

[18] G. Saygili, C. G. Gurler, and A. M. Tekalp, "Evaluation of Asymmetric Stereo Video Coding and Rate Scaling for Adaptive 3D Video Streaming," IEEE Trans. Broadcast., vol. 57, no. 2, pp. 593-601, Jun. 2011.

[19] Y. Chang, and M. Kim, "Binocular suppression-based stereoscopic video coding by joint rate control with KKT conditions for a hybrid video codec system," IEEE Trans. Circuits Syst. Video Technol., vol. 25, no. 1, pp. 99-111, Jan. 2015.

[20] SA. Fezza and M-C. Larabi, "Perceptually Driven Nonuniform Asymmetric Coding of Stereoscopic 3D Video," IEEE Trans. Circuits Syst. Video Technol., vol. 27, no. 10, pp. 2231-2245, Oct. 2017.

[21] M. G. Perkins, "Data compression of stereopairs," IEEE Trans. Commun., vol. 40, no. 4, pp. 684-696, Apr. 1992.

[22] P. Aflaki, M. M. Hannuksela, J. Hakkinen, P. Lindroos, and M. Gabbouj, "Impact of downsampling ratio in mixed-resolution stereoscopic video," in Proc. 3DTV-Conference, Tampere, Finland, Jun. 2010.

[23] V. De Silva, H. Kodikara Arachchi, E. Ekmekcioglu, A. Fernando, S. Dogan, A. Kondoz and S. Savas, "Psycho-physical limits of interocular blur suppression and its application to asymmetric stereoscopic video delivery," in Proc. 19th International Packet Video Workshop (PV 2012), Munich, Germany, May. 2012, pp. 184-189.

[24] C. Fehn, P. Kauff, S. Cho, H. Kwon, N. Hur, and J. Kim, "Asymmetric coding of stereoscopic video for transmission over T-DMB," in Proc. 3DTV Conference, Kos Island, Greece, May. 2007.

[25] Y. Chen, S. Liu, Y.-K. Wang, M. M. Hannuksela, H. Li, and M. Gabbouj, "Low-complexity asymmetric multiview video coding," in Proc. International Conference on Multimedia and Expo (ICME 2008), Hannover, Germany, Apr. 2008, pp. 773-776.

[26] H. Brust, A. Smolic, K. Muller, G. Tech, and T. Wiegand, "Mixed Resolution Coding of Stereoscopic Video for Mobile Devices," in Proc. 3DTV-Conference, Potsdam, Germany, 2009.

[27] F. Shao, G. Jiang, X. Wang, M. Yu, and K. Chen, "Stereoscopic video coding with asymmetric luminance and chrominance qualities," IEEE Trans. Consum. Electron., vol. 54, no. 6, pp. 2460-2468, Nov. 2010.

[28] N. Jayant, J. Johnston, and R. Safranek, "Signal compression based on models of human perception," Proc. IEEE, vol. 81, no. 10, pp. 1385-1422, Oct. 1993.

[29] Y. Zhao, Z. Chen, C. Zhu, Y.-P. Tan, and L. Yu, "Binocular just-noticeable-difference model for stereoscopic images," IEEE Signal Process. Lett., vol. 18, no. 1, pp. 19-22, Jan. 2011.

[30] H. Hirschmuller and D. Scharstein, "Evaluation of cost functions for stereo matching," in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2007), Minneapolis, MN, USA, Jun. 2007.

[31] G. Bjontegaard, "Calculation of average PSNR differences between RD-curves," ITU-T SG16 Q.6 Video Coding Experts Group (VCEG), Document VCEG-M33, Austin, TX, USA, Apr. 2001.

Samira Ouddane (1) and Kamel Mohamed Faraoun (1)

(1) Department of Computer Science, Djillali Liabes University of Sidi Bel Abbes, Sidi Bel Abbes, Algeria

Received 25 Jan. 2018, Revised 24 May 2018, Accepted 14 Jun. 2018, Published 1 July 2018


Samira OUDDANE received the Engineer degree and the M.S. degree in computer science from the University of Sciences and Technology of Oran, Algeria, in 2006 and 2009, respectively. She is currently working toward the Ph.D. degree in computer science at Djillali Liabes University, Sidi Bel Abbes, Algeria. She is an Assistant Professor at the University of Oran 2, Oran, Algeria. Her research interests include 3D image/video processing, image/video coding and medical imaging.

Kamel Mohamed FARAOUN received the M.S. and Ph.D. degrees in computer science and the Habilitation a Diriger des Recherches (HDR) degree from Djillali Liabes University (UDL), Sidi-Bel-Abbes, Algeria, in 2002, 2006, and 2009, respectively. Currently, he is Professor at the Computer Science Department, UDL. His research interests include computer security systems, cryptography, image coding, multimedia communications, cellular automata, evolutionary programming, and information theory. Pr. Faraoun is a member of the Evolutionary Engineering and Distributed Information Systems Laboratory, EEDIS.
TABLE 1. Bitrate savings and average PSNR (dB) gains against the symmetric coding method, calculated using the Bjontegaard measurement. Negative bitrate values correspond to bitrate savings compared to the symmetric coding method, whereas negative PSNR values indicate a reduction in PSNR.

Images    AC_Down             AC_Blur             AC_FQuality         Proposed
          Bitrate    ΔPSNR    Bitrate    ΔPSNR    Bitrate    ΔPSNR    Bitrate    ΔPSNR
Art       -13.06%    0.56     -8.40%     0.61     -6.16%     0.74     -14.66%    1.07
Books     -11.20%    0.94     -3.32%     0.05     -2.66%     0.12     -15.32%    1.23
Cloth3    -5.64%     0.84      1.85%     0.03      1.20%    -0.77     -8.88%     0.98
Cones     -15.31%    1.09     -12.42%    0.30     -12.49%    0.89     -15.90%    1.55
Midd1     -13.35%    1.03      0.97%     0.14     -6.61%     0.20     -15.25%    1.13
Midd2     -12.33%    1.12     -2.53%     0.43     -12.01%    1.56     -12.65%    1.51
Moebius   -10.19%    1.15     -12.04%    0.76     -1.30%     0.35     -16.32%    1.35
Teddy     -8.02%     0.92     -2.93%    -0.51     -7.35%     0.87     -8.14%     0.92
Average   -11.13%    0.95     -4.85%     0.22     -5.92%     0.49     -13.39%    1.21
Publication: International Journal of Computing and Digital Systems, Jul. 2018.