# Optimization-based Image Watermarking Algorithm Using a Maximum-Likelihood Decoding Scheme in the Complex Wavelet Domain

1. Introduction

Digital multimedia data, such as audio, image, and video data, can be easily accessed and distributed over the Internet due to the rapid development of information technology. Therefore, the copyright protection of multimedia products has become increasingly important. Digital watermarking technology, which is one of the key technologies used in the field of information hiding, has been widely investigated in recent years with regard to copyright protection, image authentication, and fingerprinting [1-3].

Robustness, invisibility, and capacity are indispensable for a watermarking system; however, they are conflicting requirements that influence one another and must frequently be addressed together [4]. Consequently, any watermarking algorithm should ideally strike a good balance among the three aspects. For example, the robustness of a watermark inevitably decreases as imperceptibility increases.

In general, watermarking methods can be categorized in different ways. For example, based on the embedding space of the watermark, they can be divided into spatial-domain [5] and frequency-domain [6-10] methods. Watermark hiding methods can also be classified into additive [11], multiplicative [12-13], and quantization-based [14-15] methods. In terms of watermark decoding, watermarking methods can be categorized into blind [12] and non-blind [16] methods. As noted in the literature [7-8,17], frequency-domain watermarking algorithms, particularly wavelet-based ones, dominate current watermarking methods because they are robust, invisible, and stable. Moreover, multiplicative watermarking methods are more robust and provide higher watermark imperceptibility than additive methods. Accordingly, a frequency-domain watermarking algorithm based on a multiplicative embedding rule is considered in this study.

Wavelet transform (WT) can accurately represent 1D signal discontinuities due to its approximate compact support and alternating positive and negative fluctuations in the time domain. However, wavelets fail to represent singularities as dimensionality increases because WT lacks translation invariance. To overcome these limitations, many multiscale geometric analysis methods that can capture intrinsic geometric structures in natural images, such as smooth curves and contours, have been developed. Examples include ridgelets [18], wave atoms [19], contourlets [16, 20], and the dual-tree complex WT (DT-CWT) [21].

In this work, we apply DT-CWT to embed watermarks into images. DT-CWT can be regarded as an overcomplete transform that creates redundant coefficients, which can be used to embed watermarks [22]. As stated in [22], approximate shift invariance is one of the key features of complex wavelets. A watermark produced by exploiting this property can still be decoded even after the watermarked image has undergone extensive geometric distortions, such as scaling and rotation. DT-CWT also achieves good directional selectivity when dealing with high-dimensional signals.

Motivated by [17], we propose a multi-objective optimization-based watermarking method by using DT-CWT in this study. First, we segment an original image into small blocks and use high-entropy blocks as embedding space. Second, in accordance with the value of the watermark bits, we embed the watermark into the real parts of the complex wavelet coefficients based on a multiplicative mechanism by using a watermark strength factor. A maximum likelihood (ML) detector is used for watermark extraction. Finally, the experimental results demonstrate the invisibility of the proposed watermarking method and its higher robustness to resist attacks compared with other watermarking methods based on simulations on test images.

A substantial difference exists between the proposed method and the method presented in [17], and it consists of two aspects. On the one hand, the watermark embedding spaces of the two methods differ: the method in [17] embeds the watermark information into the wavelet coefficients of the host image, whereas the proposed method embeds it into the complex wavelet coefficients. On the other hand, the current study adopts the complex wavelet structural similarity index (CW-SSIM) in the design of the multi-objective optimization method to describe image visibility when finding the optimal value of the watermark strength factor. The CW-SSIM metric is built on local phase measurements in the complex WT domain and thus provides a good approximation of perceptual image quality, whereas the SSIM metric is highly sensitive to translation, scaling, and rotation of images [17].

The contributions of the proposed method are summarized as follows. 1) The proposed watermarking approach embeds watermark data into high-entropy regions by using DT-CWT, which improves the robustness of the watermark against geometric distortion attacks. 2) A multi-objective optimization model is built on an error probability and an image quality metric. In particular, CW-SSIM is utilized as an objective function to describe the visual quality of an image; because it exploits local phase information, it provides a good approximation of perceptual image quality. Therefore, a trade-off between the imperceptibility and robustness of watermarking can be achieved by using the error probability and the CW-SSIM index as the objective functions.

The remainder of this paper is organized as follows. Section 2 presents the proposed watermarking method that consists of watermark embedding and decoding. Section 3 describes the multi-objective optimization strategy. Section 4 provides the experimental results and discussions about the performance of the proposed watermarking method. Finally, Section 5 concludes this study.

2. Watermark Embedding and Detection

In accordance with the human visual system (HVS), strong edges are typically observed in high-entropy blocks, and the human eye is less sensitive to distortions in such blocks [16]. Inspired by the entropy masking model [23], we select high-entropy image blocks as the watermark embedding space in this study. The proposed watermarking method consists of two stages: embedding and detection. Figure 1 shows the flowchart of the proposed watermarking method, which is performed through the following steps.

2.1 Watermark Embedding

Fig. 1(a) presents the block diagram of the embedding method. The watermark embedding process consists of the following steps.

Step 1: The host image is segmented into L x L blocks, and N high-entropy blocks are selected; the selection threshold is set to the average entropy of all the image blocks. The entropy is defined as

$H = -\sum_{i=1}^{n} p_i \log p_i$, (1)

where $p_i$ indicates the probability that grey level $i$ appears in the image, $\sum_{i=1}^{n} p_i = 1$, and $n$ denotes the number of distinct pixel values. Entropy measures the amount of information in a signal: when the entropy of the host image is high, considerable watermark information can be embedded into it. Thus, entropy is applied to improve the robustness of watermarking.
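To make Step 1 concrete, the block selection can be sketched in a few lines of NumPy. The helper names `block_entropy` and `select_blocks` are ours for illustration, not part of the original scheme:

```python
import numpy as np

def block_entropy(block, n_bins=256):
    # Shannon entropy of the grey-level histogram, Eq. (1)
    hist, _ = np.histogram(block, bins=n_bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop zero-probability levels
    return -np.sum(p * np.log2(p))

def select_blocks(image, L=32):
    # Split the image into L x L blocks and keep those whose entropy
    # exceeds the average entropy of all blocks (the paper's threshold).
    h, w = image.shape
    blocks = [(r, c, image[r:r + L, c:c + L])
              for r in range(0, h, L) for c in range(0, w, L)]
    ents = np.array([block_entropy(b) for _, _, b in blocks])
    return [(r, c) for (r, c, _), e in zip(blocks, ents) if e > ents.mean()]
```

Flat regions have near-zero entropy, so only textured blocks pass the average-entropy threshold.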

Step 2: DT-CWT is adopted to decompose each selected image block, and a watermark bit of "0" or "1" is embedded into each block by scaling the real parts of the DT-CWT coefficients through the following approach:

$\hat{x} = \lambda \cdot x$, for embedding 1, (2)

$\hat{x} = \frac{1}{\lambda} \cdot x$, for embedding 0, (3)

where $\lambda$ is the strength factor and is greater than 1.0; its optimal value is discussed in Section 3. $x$ represents the real parts of the DT-CWT coefficients, whereas $\hat{x}$ denotes the watermarked coefficients.

Step 3: Step 2 is repeated for each image block.

Step 4: Inverse DT-CWT is applied to the watermarked blocks, which are then combined with the unmodified blocks to obtain the entire watermarked image.
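The multiplicative rule of Eqs. (2)-(3) reduces to very little code. The sketch below uses plain NumPy arrays as stand-ins for the real parts of the DT-CWT coefficients, which in practice would come from a DT-CWT implementation; the function names are illustrative:

```python
import numpy as np

def embed_bit(x, bit, lam):
    # Eqs. (2)-(3): scale the real parts of the DT-CWT coefficients
    # up by lam for bit "1" and down by 1/lam for bit "0" (lam > 1).
    return lam * x if bit == 1 else x / lam

def embed(block_coeffs, bits, lam=1.05):
    # one watermark bit per selected high-entropy block
    return [embed_bit(x, b, lam) for x, b in zip(block_coeffs, bits)]
```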

2.2 Watermark Detection

Fig. 1(b) shows the flowchart of watermark decoding. Two types of side information are used in watermark detection: 1) the positions of the image blocks that qualify for watermark embedding based on their entropy level and 2) the watermark strength factor. Fig. 2 shows the statistical distribution of coefficients for different blocks of the Barbara image. A Gaussian distribution function can accurately model the distribution of the real parts of the complex wavelet coefficients. The decoder in a watermarking system extracts the hidden binary sequence from a set of observed complex wavelet coefficients. In this work, the bits of the binary sequence are assumed to be equally probable, and the real parts of the low-frequency DT-CWT coefficients are assumed to be independent and identically distributed (i.i.d.) Gaussian.

Following [17], the effect of attacks is modeled simply as additive white Gaussian noise (AWGN) at the receiver. Given that the DT-CWT coefficients and the noise are mutually independent and Gaussian distributed, the distribution of $y$ for "1" or "0" embedding can be expressed as

$y_{i|1} = \lambda x_i + n_i \;\Rightarrow\; y_{i|1} \sim N(\lambda\mu,\, \sigma^2_{y|1})$, (4)

$y_{i|0} = \lambda^{-1} x_i + n_i \;\Rightarrow\; y_{i|0} \sim N(\lambda^{-1}\mu,\, \sigma^2_{y|0})$, (5)

where $\sigma^2_{y|1} = \lambda^2 \sigma^2 + \sigma^2_n$, $\sigma^2_{y|0} = \lambda^{-2} \sigma^2 + \sigma^2_n$, and $\sigma^2_n$ denotes the variance of the noise [17]. $x_i$ represents the low-frequency sub-band DT-CWT coefficients used for watermark embedding, with mean $\mu$ and variance $\sigma^2$.

The DT-CWT coefficients are assumed to be i.i.d. in this study because they are decimated in each decomposition scale. Therefore, the distribution of the DT-CWT coefficients in each block with $N$ coefficients $y_1, y_2, \ldots, y_N$ for "1" embedding is [17]

$f(y_1, y_2, \ldots, y_N \mid 1) = \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi}\,\sigma_{y|1}} \exp\!\left( -\frac{(y_i - \lambda\mu)^2}{2\sigma^2_{y|1}} \right)$. (6)

Similarly, for "0" embedding, we have

$f(y_1, y_2, \ldots, y_N \mid 0) = \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi}\,\sigma_{y|0}} \exp\!\left( -\frac{(y_i - \lambda^{-1}\mu)^2}{2\sigma^2_{y|0}} \right)$. (7)

On the basis of the ML decision criterion [24], watermark detection can be presented as follows:

$f(y_1, y_2, \ldots, y_N \mid 1) > f(y_1, y_2, \ldots, y_N \mid 0)$: extracted $\hat{w} = 1$, (8)

$f(y_1, y_2, \ldots, y_N \mid 1) < f(y_1, y_2, \ldots, y_N \mid 0)$: extracted $\hat{w} = 0$. (9)

Thus, by substituting (6) and (7) into (8) and (9), we derive

$\prod_{i=1}^{N} \frac{1}{\sqrt{2\pi}\,\sigma_{y|1}} \exp\!\left( -\frac{(y_i - \lambda\mu)^2}{2\sigma^2_{y|1}} \right) > \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi}\,\sigma_{y|0}} \exp\!\left( -\frac{(y_i - \lambda^{-1}\mu)^2}{2\sigma^2_{y|0}} \right)$: extracted $\hat{w} = 1$, (10)

$\prod_{i=1}^{N} \frac{1}{\sqrt{2\pi}\,\sigma_{y|1}} \exp\!\left( -\frac{(y_i - \lambda\mu)^2}{2\sigma^2_{y|1}} \right) < \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi}\,\sigma_{y|0}} \exp\!\left( -\frac{(y_i - \lambda^{-1}\mu)^2}{2\sigma^2_{y|0}} \right)$: extracted $\hat{w} = 0$. (11)

Taking the logarithm of both sides, we obtain

$\left( \frac{1}{\sigma^2_{y|0}} - \frac{1}{\sigma^2_{y|1}} \right) \sum_{i=1}^{N} y_i^2 - 2\mu \left( \frac{\lambda^{-1}}{\sigma^2_{y|0}} - \frac{\lambda}{\sigma^2_{y|1}} \right) \sum_{i=1}^{N} y_i > \tau$: extracted $\hat{w} = 1$, (12)

$\left( \frac{1}{\sigma^2_{y|0}} - \frac{1}{\sigma^2_{y|1}} \right) \sum_{i=1}^{N} y_i^2 - 2\mu \left( \frac{\lambda^{-1}}{\sigma^2_{y|0}} - \frac{\lambda}{\sigma^2_{y|1}} \right) \sum_{i=1}^{N} y_i < \tau$: extracted $\hat{w} = 0$, (13)

where the watermark detection threshold [tau] can be expressed as

$\tau = 2N \ln\!\left( \frac{\sigma_{y|1}}{\sigma_{y|0}} \right) - N\mu^2 \left( \frac{\lambda^{-2}}{\sigma^2_{y|0}} - \frac{\lambda^2}{\sigma^2_{y|1}} \right)$. (14)
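Rather than evaluating the threshold $\tau$ explicitly, the same ML decision can be sketched by comparing the two Gaussian log-likelihoods of Eqs. (8)-(9) directly, which is mathematically equivalent. The function below is an illustrative sketch under the paper's Gaussian model; the name `detect_bit` is ours:

```python
import numpy as np

def detect_bit(y, mu, sigma2, sigma2_n, lam):
    # ML rule of Eqs. (8)-(9): compare the Gaussian log-likelihoods of
    # the observed block coefficients under the "1" and "0" hypotheses.
    s2_1 = lam**2 * sigma2 + sigma2_n        # variance of y given "1"
    s2_0 = sigma2 / lam**2 + sigma2_n        # variance of y given "0"
    ll1 = -0.5 * np.sum(np.log(2 * np.pi * s2_1) + (y - lam * mu)**2 / s2_1)
    ll0 = -0.5 * np.sum(np.log(2 * np.pi * s2_0) + (y - mu / lam)**2 / s2_0)
    return 1 if ll1 > ll0 else 0
```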

3. Multi-objective Optimization Method

To find the optimal value of the watermark strength factor $\lambda$, a multi-objective optimization method is designed that ensures invisibility with acceptable robustness by combining an image quality evaluation measure with the watermark error probability.

3.1 Image Quality Evaluation Function

The work in [25] presented an image quality assessment method based on the structural similarity (SSIM) index. First, the local SSIM index is calculated by comparing image patches; then, the global SSIM index is computed by applying a sliding-window scheme followed by a spatial pooling stage [25]. The basic principle of the SSIM metric is that the human visual system is highly adapted to extracting structural information from images, so SSIM provides a good image quality evaluation mechanism. However, its major drawback is that it is highly sensitive to translation, scaling, and rotation.

The authors of [26] proposed the CW-SSIM metric by considering local phase measurements in the DT-CWT domain. As reported in [26], the local phase pattern contains more structural information than the local magnitude. Furthermore, nonstructural image distortions, such as small translations, lead to a consistent phase shift in a group of adjacent DT-CWT coefficients. Accordingly, CW-SSIM separates the phase pattern from the magnitude distortion measurement and imposes heavy penalties on inconsistent phase distortions [26].

Given two sets of DT-CWT coefficients, $C_x = \{c_{x,i} \mid i = 1, 2, \ldots, N\}$ and $C_y = \{c_{y,i} \mid i = 1, 2, \ldots, N\}$, extracted at the same location in the same DT-CWT sub-bands of the host and distorted images, the local CW-SSIM index $S_L(\cdot)$ can be expressed as

$S_L(C_x, C_y) = \frac{2\left| \sum_{i=1}^{N} c_{x,i}\, c^{*}_{y,i} \right| + C_0}{\sum_{i=1}^{N} |c_{x,i}|^2 + \sum_{i=1}^{N} |c_{y,i}|^2 + C_0}$, (15)

where $c^{*}$ represents the complex conjugate of the DT-CWT coefficient $c$, and $C_0$ is a small positive constant. On the basis of [26], the global CW-SSIM index is calculated as the average of all local CW-SSIM values. The primary advantage of the CW-SSIM index is that it is simultaneously insensitive to contrast change, luminance change, and small geometric distortions, such as translation, scaling, and rotation [26].
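Equation (15) translates directly into code. The sketch below assumes two equal-length vectors of complex sub-band coefficients; `cw_ssim_local` is our illustrative name:

```python
import numpy as np

def cw_ssim_local(cx, cy, C0=0.01):
    # Local CW-SSIM index of Eq. (15) for two vectors of complex
    # DT-CWT coefficients taken from the same sub-band location.
    num = 2 * np.abs(np.sum(cx * np.conj(cy))) + C0
    den = np.sum(np.abs(cx)**2) + np.sum(np.abs(cy)**2) + C0
    return num / den
```

Identical coefficient sets score exactly 1, and any scaling ($\lambda \neq 1$) pushes the index below 1.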

Given that the selected image blocks generally carry most of the energy of the image, the watermark embedding method can be regarded as a scaling of all the complex wavelet coefficients and approximately represented as $C_y \approx \lambda C_x$. Substituting $C_y \approx \lambda C_x$ into (15), the local CW-SSIM index can be further expressed as

$S_L(C_x, \lambda C_x) = \frac{2\lambda \sum_{i=1}^{N} |c_{x,i}|^2 + C_0}{(1 + \lambda^2) \sum_{i=1}^{N} |c_{x,i}|^2 + C_0}$. (16)

On the basis of (16), the global CW-SSIM index between two images can be expressed as the average of all image sub-blocks as follows:

$S(\lambda) = \frac{1}{K} \sum_{i=1}^{K} S_L(C_x, C_y)$, (17)

where K represents the number of image blocks.

From the preceding analysis, the image distortion function is defined as $F_{\text{CW-SSIM}}(\lambda) = 1 - S(\lambda)$, which is the first objective function and captures the effect of $\lambda$ on image distortion. Fig. 3(a) shows the relationship between the image distortion function and the embedding strength factor. As shown in the figure, the objective function grows as the embedding strength factor increases; that is, a larger strength factor causes greater distortion of the watermarked image. Accordingly, a second objective function, called the error probability function, is defined to control watermark robustness. On the basis of the two objective functions, a multi-objective optimization model can be defined to achieve a balance between visual perception quality and robustness.
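Assuming the closed form of Eq. (16), the first objective function can be evaluated for any $\lambda$ without recomputing the transform. In this sketch, `E` stands for the block coefficient energy $\sum_i |c_{x,i}|^2$, and the function name is ours:

```python
import numpy as np

def distortion_objective(lam, E, C0=0.01):
    # F_CW-SSIM(lam) = 1 - S(lam), using the closed form of Eq. (16),
    # where E = sum_i |c_x,i|^2 is the energy of the block coefficients.
    s = (2 * lam * E + C0) / ((1 + lam**2) * E + C0)
    return 1 - s
```

As expected from Fig. 3(a), the distortion is zero at $\lambda = 1$ and grows monotonically for $\lambda > 1$.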

3.2 Error Probability of Watermark

In this work, we discuss the watermark error probability in the presence of AWGN as follows. An error occurs when watermark bit "1" is embedded into a host signal but "0" is decoded at the detection end, and vice versa. The watermark error probability therefore comprises two error terms.

On the basis of (12), the error probability when watermark bit "1" is embedded is

$f_{e|1} = \Pr\!\left\{ \omega_1 \sum_{i=1}^{N} y_i^2 + \omega_2 \sum_{i=1}^{N} y_i < \tau \right\}$, (18)

where $\omega_1 = \frac{1}{\sigma^2_{y|0}} - \frac{1}{\sigma^2_{y|1}}$ and

$\omega_2 = -2\mu \left( \frac{\lambda^{-1}}{\sigma^2_{y|0}} - \frac{\lambda}{\sigma^2_{y|1}} \right)$.

From the results of [17], $f_{e|1}$ can be written as

[mathematical expression not reproducible] (19)

where $\gamma(N/2, x/2)$ denotes the lower incomplete gamma function, $\kappa_1 = \omega_1 \sigma^2_{y|1}$, $\kappa_2 = 2\lambda\mu\omega_1 + \omega_2$, and $\hat{\tau} = \tau + \omega_1 N \lambda^2 \mu^2$.

Furthermore, the error probability when watermark bit "0" is embedded is

$f_{e|0} = \Pr\!\left\{ \omega_1 \sum_{i=1}^{N} y_i^2 + \omega_2 \sum_{i=1}^{N} y_i > \tau \right\}$. (20)

From [17], $f_{e|0}$ can be written as

[mathematical expression not reproducible], (21)

where $c_1 = \omega_1 \sigma^2_{y|0}$, $\sigma^2_{y|0} = \lambda^{-2}\sigma^2 + \sigma^2_n$, $c_2 = 2\lambda^{-1}\mu\omega_1 + \omega_2$, and $\tilde{\tau} = \tau + \omega_1 N \lambda^{-2} \mu^2$.

In summary, given that watermark bits "0" and "1" are embedded into an image with equal probability, the total error probability based on (19) and (21) is

$F_e(\lambda) = 0.5\, f_{e|1} + 0.5\, f_{e|0}$. (22)
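The behavior of Eq. (22) can be checked empirically by Monte Carlo simulation under the Gaussian model: embed a random bit, add AWGN, decode by comparing the two Gaussian log-likelihoods, and count errors. The parameter values below are illustrative, not the paper's:

```python
import numpy as np

def monte_carlo_error(lam, mu=10.0, sigma2=4.0, sigma2_n=1.0,
                      N=16, trials=2000, seed=0):
    # Empirical counterpart of Eq. (22): embed "0" or "1" with equal
    # probability, pass through AWGN, decode by ML, and count errors.
    rng = np.random.default_rng(seed)
    s2_1 = lam**2 * sigma2 + sigma2_n
    s2_0 = sigma2 / lam**2 + sigma2_n
    errors = 0
    for _ in range(trials):
        bit = int(rng.integers(0, 2))
        x = rng.normal(mu, np.sqrt(sigma2), N)
        y = (lam * x if bit else x / lam) + rng.normal(0, np.sqrt(sigma2_n), N)
        ll1 = -0.5 * np.sum(np.log(s2_1) + (y - lam * mu)**2 / s2_1)
        ll0 = -0.5 * np.sum(np.log(s2_0) + (y - mu / lam)**2 / s2_0)
        errors += int((1 if ll1 > ll0 else 0) != bit)
    return errors / trials
```

Consistent with Fig. 3(b), the estimated error probability drops as $\lambda$ grows.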

Fig. 3(b) illustrates the objective function $F_e(\lambda)$ with different embedding strength factors for the Lena image with dimensions of 512 x 512. In this figure, $F_e(\lambda)$ monotonically decreases as the embedding strength factor increases.

This study aims to determine the optimal value of the watermark strength factor by minimizing the two objective functions. However, the two functions cannot be optimized jointly in a straightforward manner because they have different scales and properties. Therefore, we formulate them as a multi-objective optimization problem and adopt the goal attainment method presented in [27] to solve it. In this method, a set of design goals and a set of weights are considered to optimize a set of objective functions with several trade-offs. The multi-objective optimization problem can be expressed as

$\min_{\eta,\, x \in \Omega} \eta \quad \text{subject to} \quad \phi_i(x) - w_i \eta \le \phi^{*}_i, \; i = 1, 2, \ldots, n$, (23)

where $w = [w_1, w_2, \ldots, w_n]$ denotes the weight vector, $\phi(x) = [\phi_1(x), \phi_2(x), \ldots, \phi_n(x)]$ denotes the set of objective functions, and $\phi^{*} = [\phi^{*}_1, \phi^{*}_2, \ldots, \phi^{*}_n]$ represents the set of design goals. $\Omega$ represents the feasible region in the parameter space $x$, and $\eta$ denotes an auxiliary variable. The weight vector $w$ enables the designer to select trade-offs among the objective functions.

In this experiment, we set the weights to 0.6 and 0.4 for [F.sub.CW-SSIM] ([lambda]) and [F.sub.e] ([lambda]), respectively, and we use the same method to find the optimum parameter for each image.
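The goal attainment formulation of Eq. (23) can be sketched with a simple grid search: for each candidate $\lambda$, the smallest feasible $\eta$ is $\max_i (\phi_i(\lambda) - \phi^{*}_i)/w_i$. The two objective functions below are illustrative stand-ins with the correct monotonic behavior, not the paper's exact $F_{\text{CW-SSIM}}$ and $F_e$:

```python
import numpy as np

def goal_attainment(objectives, goals, weights, candidates):
    # Goal-attainment scalarization of Eq. (23): for a candidate x the
    # smallest feasible eta is max_i (phi_i(x) - phi_i*) / w_i; return
    # the candidate that minimizes that value.
    best_x, best_eta = None, np.inf
    for x in candidates:
        eta = max((phi(x) - g) / w
                  for phi, g, w in zip(objectives, goals, weights))
        if eta < best_eta:
            best_x, best_eta = x, eta
    return best_x, best_eta

# Hypothetical stand-in objectives: distortion grows with lambda
# while the error probability shrinks, as in Fig. 3.
F_cw = lambda lam: (lam - 1.0) ** 2
F_e = lambda lam: np.exp(-8.0 * (lam - 1.0))

lam_opt, _ = goal_attainment([F_cw, F_e], goals=[0.0, 0.0],
                             weights=[0.6, 0.4],
                             candidates=np.linspace(1.0, 1.6, 61))
```

The optimum lands where the weighted attainment of the two objectives balances, mirroring the trade-off the paper seeks.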

4. Experimental Results and Analysis

4.1 Imperceptibility of Watermarking

To evaluate the performance of the proposed watermarking method, several experiments are conducted on real images. In this section, eight natural images (Lena, Barbara, Boat, Peppers, Baboon, Couple, Goldhill, and Plane) with dimensions of 512 x 512 are used for testing. Fig. 4 shows the host images and their watermarked versions obtained using the proposed watermarking method with a block size of 32 x 32. A three-level DT-CWT with near-symmetric 13, 19 tap filters and Q-shift 14, 14 tap filters is utilized on each selected block throughout the experiments. In this work, we use 128 high-entropy blocks for the proposed method, and we set the length of the host vectors to 16. Therefore, the total watermark length is 2048 (128 x 16).

As shown in Fig. 4, the top part presents the original images, the middle part the watermarked images, and the bottom part the difference between the original and watermarked images. Watermark imperceptibility is satisfied because the proposed method provides an image-dependent watermark whose strong components lie in the textured parts of the image. In this manner, a strong watermark can be inserted into image regions where texture information is abundant while maintaining good fidelity. Furthermore, the peak signal-to-noise ratio (PSNR) and SSIM [25] values of the watermarked images are computed to objectively investigate the performance of the proposed watermarking method. The results, provided in Table 1, verify its performance.

4.2 Performance under Attacks

In this section, various types of attacks are applied to the watermarked images to evaluate the robustness of the proposed method, which is measured by the bit error ratio (BER). To save space while reporting all the parameters used, the robustness of the proposed method under AWGN, JPEG compression, amplitude scaling, rotation, and combined attacks is investigated on eight well-known images, namely, Lena, Barbara, Boat, Peppers, Baboon, Couple, Goldhill, and Plane. Each original image has dimensions of 512 x 512, and binary watermark data with 2048 bits are embedded into each original image.
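BER, the robustness measure used throughout this section, is simply the fraction of wrongly decoded watermark bits; the helper name below is ours:

```python
import numpy as np

def bit_error_ratio(sent, received):
    # BER: fraction of watermark bits decoded incorrectly
    sent, received = np.asarray(sent), np.asarray(received)
    return float(np.mean(sent != received))
```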

1) AWGN attack

Fig. 5 illustrates the robustness against AWGN attack for various images. As shown in the figure, BER is nearly 0 when noise variance is less than 10. BER is nearly 0.1 when noise variance is less than 30. Therefore, the proposed watermarking method exhibits good robustness against AWGN attacks.

2) JPEG compression attack

Fig. 6 presents the BER results for JPEG compression attack. As shown in the figure, the proposed method exhibits strong robustness against JPEG compression attacks, even though the compression quality factor is extremely low. When the strength of the JPEG compression attack is considerable, such as when the quality factor is 5, the BER result is less than 0.2. BER tends toward 0 when the JPEG quality factor is greater than 25.

3) Scaling attack

Fig. 7 presents the BER results for amplitude scaling attack. As shown in the figure, BER is high when the scaling factor is less than 0.4, and the BER curve fluctuates as the scaling factor increases; this phenomenon will be investigated in our future work. Moreover, the watermark detector recovers the original size of an image by using side information, so the attacked image can be resized to its original dimensions before the watermark information is decoded at the receiver end.

4) Rotation attack

Fig. 8 presents the BER results for various images against rotation attack. As shown in the figure, the rotation angle ranges over [-10[degrees], 10[degrees]]. When the rotation angle is within [-2[degrees], 2[degrees]], BER lies in [0.03, 0.07], and BER changes only slightly as the angle increases further. Therefore, the proposed embedding approach is robust against rotation attacks.

5) Combinational attack

To evaluate the robustness of the proposed method under combined attacks, the AWGN attack is combined with the JPEG compression (quality factor = 20%) attack, and the resulting BER values are illustrated in Fig. 9. The figure confirms the robustness of the proposed watermarking method against this type of combined attack.

4.3 Comparison with Other Methods

We compare the proposed watermarking method with the methods presented in [17], [18], [28], [29], and [30]. The algorithms in [17,18,28,29] are selected because of their similarity to the proposed method, and the method in [30] is a logarithmic quantization-based watermarking scheme. For a fair comparison, we use the same message length and PSNR values in our experiments as those used in the other studies. Table 2 presents the results.

As shown in Table 2, the experiments are conducted by embedding the same watermark length of 256 bits, at a PSNR of approximately 45 dB after watermarking as in [17], [18], [28], and [29], into the Barbara, Boat, Peppers, Baboon, Couple, and Goldhill images; the selected image block size is 8 x 8. The proposed method outperforms the other methods under AWGN, JPEG compression, scaling, and rotation attacks for the Barbara, Boat, Baboon, Couple, and Goldhill images. This result is attributed to the good approximate shift invariance of DT-CWT under geometric transformations. Moreover, the results for the Baboon image are poorer than those for the other images under median filtering attack because the Baboon image contains highly textured areas that are easily degraded by median filtering.

However, the proposed method is slightly worse than the methods in [17], [18], [28], and [29] in terms of median filtering attack. In addition, its performance is slightly worse than that of the algorithms in [28] and [29] in terms of resistance to Gaussian low-pass filtering attack. We will investigate these problems in our future work.

Table 3 presents the BER values obtained using the proposed watermarking scheme and the schemes in [17], [18], [28], [29], and [30], in which the watermarked images (Lena, Peppers, Baboon, and Couple) contain a 256-bit message. The result of method [30] is obtained by using the vector logarithmic quantization index modulation approach shown in Table 3. As indicated in Table 3, the proposed method achieves better results than the methods in [17], [18], [28], [29], and [30]. However, our method is more sensitive to Gaussian noise attack for the Baboon and Couple images compared with the method in [28], particularly when the noise variance is greater than 25. This result will be investigated by evaluating the statistical properties of noise in our subsequent work.

Table 4 provides the BER results for JPEG compression attack using the proposed method and the methods in [17,18,28,29,30], where the watermark length is 256 bits. As shown in the table, the proposed method exhibits better results than the methods in [17,18,28,29,30] for JPEG compression attacks.

Table 5 presents the BER results for amplitude scaling attack with a watermark length of 128 bits and the PSNR of the watermarked images being approximately 43 dB. As shown in the table, the results of the proposed method are better than the results of the methods in [17], [18], and [28] for this kind of attack.

From the preceding comparative analysis, the proposed method does not perform well under some attacks, such as median filtering and Gaussian low-pass filtering. In this work, however, a multi-objective optimization model based on the CW-SSIM metric and the watermark error probability is utilized to achieve optimal control of the embedding strength factor by using DT-CWT and information entropy. Therefore, a trade-off between the invisibility and robustness of watermarking can be achieved. Furthermore, the optimization model enhances watermark robustness against certain geometric attacks.

5. Conclusion

In this study, we develop an image watermarking algorithm by utilizing DT-CWT and a multiplicative strategy. High-entropy image blocks are used to carry the watermark in the complex wavelet coefficients during embedding. The watermark information is then detected based on the ML decision criterion, and robustness relies on an optimal watermark strength factor obtained through multi-objective optimization. We evaluate the performance of the proposed method in terms of image quality and robustness, and the experimental results show its effectiveness. Future work will focus on novel image watermarking algorithms based on techniques such as deep learning [31-32], compressive sensing, and game theory.

References

[1] D. Bhowmik, C. Abhayaratne, "Quality scalability aware watermarking for visual content," IEEE Transactions on Image Processing, vol.25, no.11, pp.5158-5172, November, 2016. Article (CrossRef Link).

[2] J. X. Wang, J. Q. Ni, X. Zhang, and Y. Q. Shi, "Rate and distortion optimization for reversible data hiding using multiple histogram shifting," IEEE Transactions on Cybernetics, vol.47, no.2, pp.315-326, February, 2017. Article (CrossRef Link).

[3] J. T. Zhou, W. W. Sun, L. Dong, X. M. Liu, Oscar C. Au, and Y. Y. Tang, "Secure reversible image data hiding over encrypted domain via key modulation," IEEE Transactions on Circuits and Systems for Video Technology, vol.26, no.3, pp.441-451, March, 2016. Article (CrossRef Link).

[4] G. C. Langelaar, I. Setyawan, R. L. Lagendijk, "Watermarking digital image and video data: A state-of-the-art overview," IEEE Signal Processing Magazine, vol.17, no.5, pp.20-46, September, 2000. Article (CrossRef Link).

[5] C. J. Cheng, W. Hwang, H. Zeng, Y. Lin, "A fragile watermarking algorithm for hologram authentication," Journal of Display Technology, vol.10, no.4, pp.263-271, April, 2014. Article (CrossRef Link).

[6] Y. Bian, S. Liang, "Image watermark detection in the wavelet domain using Bessel K densities," IET Image Processing, vol.7, no.4, pp.281-289, June, 2013. Article (CrossRef Link).

[7] I.J. Cox, J. Kilian, T. Leighton, "Secure spread spectrum watermarking for multimedia," IEEE Transactions on Image Processing, vol.6, no.12, pp. 1673-1687, December, 1997. Article (CrossRef Link).

[8] N. Bi, Q. Y. Sun, D. Huang, Z. H. Yang, and J. W. Huang, "Robust image watermarking based on multiband wavelets and empirical mode decomposition," IEEE Transactions on Image Processing, vol.16, no.8, pp.1956-1967, August, 2007. Article (CrossRef Link).

[9] S. Lee, C. D. Yoo, T. Kalker, "Reversible image watermarking based on integer-to-integer wavelet transform," IEEE Transactions on Information Forensics and Security, vol.2, no.3, pp.321-330, September, 2007. Article (CrossRef Link).

[10] C. Zhang, L. L. Cheng, Z. Qiu, L. M. Cheng, "Multipurpose watermarking based on multiscale curvelet transform," IEEE Transactions on Information Forensics and Security, vol.3, no.4, pp.611-619, December, 2008. Article (CrossRef Link).

[11] M. Rahman, M. O. Ahmad, M. N. S. Swamy, "A new statistical detector for DWT-based additive image watermarking using the Gauss-Hermite expansion," IEEE Transactions on Image Processing, vol.18, no.8, pp.1782-1796, August, 2009. Article (CrossRef Link).

[12] H. Sadreazami, M. O. Ahmad, M. N. S. Swamy, "A study of multiplicative watermark detection in the contourlet domain using alpha stable distributions," IEEE Transactions on Image Processing, vol.23, no.10, pp.4348-4360, October, 2014. Article (CrossRef Link).

[13] H. Sadreazami, M. O. Ahmad, M. N. S. Swamy, "A robust multiplicative watermark detector for color images in sparse domain," IEEE Transactions on Circuits and Systems II: Express Briefs, vol.62, no.12, pp.1159-1163, December, 2015. Article (CrossRef Link).

[14] Q. Li, I. J. Cox, "Using perceptual models to improve fidelity and provide resistance to valumetric scaling for quantization index modulation watermarking," IEEE Transactions on Information Forensics and Security, vol.2, no.2, pp.127-139, June, 2007. Article (CrossRef Link).

[15] M. Zareian, H.R. Tohidypour, "A novel gain invariant quantization-based watermarking approach," IEEE Transactions on Information Forensics and Security, vol.9, no. 11, pp.1804-1813, November, 2014. Article (CrossRef Link).

[16] M.A. Akhaee, S.M.E. Sahraeian, F. Marvasti, "Contourlet-based image watermarking using optimum detector in noisy environment," IEEE Transactions on Image Processing, vol.19, no.4, pp.967-980, April, 2010. Article (CrossRef Link).

[17] M.A. Akhaee, S.M.E. Sahraeian, B. Sankur, et al., "Robust scaling-based image watermarking using maximum-likelihood decoder with optimum strength factor," IEEE Transactions on Multimedia, vol.11, no.5, pp.822-833, August, 2009. Article (CrossRef Link).

[18] N.K. Kalantari, S.M. Ahadi, M. Vafadust, "A robust image watermarking in the ridgelet domain using universally optimum decoder," IEEE Transactions on Circuits and Systems for Video Technology, vol.20, no.3, pp.396-406, March, 2010. Article (CrossRef Link).

[19] L. Demanet, L. X. Ying, "Wave atoms and time upscaling of wave equations," Numerische Mathematik, vol.113, no.1, pp.1-71, May, 2009. Article (CrossRef Link).

[20] M.N. Do, M. Vetterli, "The contourlet transform: An efficient directional multiresolution image representation," IEEE Transactions Image Processing, vol.14, no.12, pp.2091-2106, December, 2005. Article (CrossRef Link).

[21] I.W Selesnick, R.G. Baraniuk, N.G. Kingsbury, "The dual-tree complex wavelets transform- a coherent framework for multi-scale signal and image processing," IEEE Signal Processing Magazine, vol.22, no.6, pp.123-151, July, 2005. Article (CrossRef Link).

[22] L.E. Coria, M.R. Pickering, P. Nasiopoulos, R.K. Ward, "A video watermarking scheme based on the dual-tree complex wavelet transform," IEEE Transactions on Information Forensics and Security, vol.3, no.3, pp.466-474, September, 2008. Article (CrossRef Link).

[23] A.B. Watson, G.Y. Yang, J.A. Solomon, et al., "Visibility of wavelet quantization noise," IEEE Transactions on Image Processing, vol.6, no.8, pp.1164-1175, August, 1997. Article (CrossRef Link).

[24] T. M. Ng, H. K. Garg, "Maximum-likelihood detection in DWT domain image watermarking using Laplacian modeling," IEEE Signal Processing Letters, vol.12, no.4, pp.345-348, April, 2005. Article (CrossRef Link).

[25] Z. Wang, A. C. Bovik, H. R. Sheikh, et al., "Image quality assessment: from error visibility to structural similarity," IEEE Transactions on Image Processing, vol.13, no.4, pp.600-612, April, 2004. Article (CrossRef Link).

[26] M. P. Sampat, Z. Wang, S. Gupta, A. C. Bovik, M. K. Markey, "Complex wavelet structural similarity: A new image similarity index," IEEE Transactions on Image Processing, vol.18, no. 11, pp.2385-2401, November, 2009. Article (CrossRef Link).

[27] F. Gembicki, Y. Haimes, "Approach to performance and sensitivity multi-objective optimization: The goal attainment method," IEEE Transactions on Automatic Control, vol.20, no.6, pp.769-771, December,1975. Article (CrossRef Link).

[28] N. Yadav, K. Singh, "Robust image adaptive watermarking using an adjustable dynamic strength factor," Signal, Image and Video Processing, vol.9, no.7, pp. 1531-1542, October, 2015. Article (CrossRef Link).

[29] H. Sadreazami, M. O. Ahmad, M. N. S. Swamy, "Multiplicative watermark decoder in contourlet domain using the normal inverse gaussian distribution," IEEE Transactions on Multimedia, vol.18, no.2, pp.196-207, February, 2016. Article (CrossRef Link).

[30] N.K. Kalantari, S.M. Ahadi, "A logarithmic quantization index modulation for perceptually better data hiding," IEEE Transactions on Image Processing, vol.19, no.6, pp.1504-1518, June, 2010. Article (CrossRef Link).

[31] H. Kandi, D. Mishra, S.R.S. Gorthi, "Exploring the learning capabilities of convolutional neural networks for robust image watermarking," Computer & Security, vol.65, pp.247-268, March,2017. Article (CrossRef Link).

[32] J.S. Zeng, S.Q. Tan, B. Li, J.W. Huang, "Large-scale JPEG image steganalysis using hybrid deep-learning framework," IEEE Transactions on Information Forensics and Security, vol.13, no.5, pp.1200-1214, May, 2018. Article (CrossRef Link).

Jinhua Liu (1,2), Yunbo Rao (3)

(1) School of Mathematics and Computer Science, ShangRao Normal University, Jiangxi Shangrao, 334001, CHINA

(2) School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, CHINA

[e-mail: liujinhua_uestc@126.com]

(3) School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, 610054, CHINA

[e-mail: uestc2008@126.com]

(*) Corresponding author: Jinhua Liu

Received May 21, 2018; revised August 6, 2018; revised August 30, 2018; accepted September 22, 2018; published January 31, 2019

Jinhua Liu received the B.S. and M.S. degrees from Jiangxi Normal University and the University of Electronic Science and Technology of China (UESTC), Chengdu, China, in 2003 and 2008, respectively, both in the School of Computer Science and Engineering, and the Ph.D. degree from UESTC, Chengdu, China, in Dec. 2011. From 2012 to 2016, he was a senior engineer at Sichuan JiuZhou Electric Group Co., Ltd. Since 2015, he has been pursuing postdoctoral research in the School of Information and Communication Engineering at UESTC. Since 2017, he has been an associate professor at the School of Mathematics and Computer Science, ShangRao Normal University. His research interests include image watermarking and computer vision.

Yunbo Rao received the B.S. and M.S. degrees from Sichuan Normal University and the University of Electronic Science and Technology of China (UESTC), Chengdu, China, in 2003 and 2006, respectively, both in the School of Computer Science and Engineering, and the Ph.D. degree from UESTC, Chengdu, China, in 2012. He was a visiting scholar in Electrical Engineering at the University of Washington, Seattle, USA, from Oct. 2009 to Oct. 2011. Since 2012, he has been an associate professor at the School of Information and Software Engineering, UESTC. His research interests include video enhancement, computer vision, and crowd animation.

http://doi.org/10.3837/tiis.2019.01.025

**Table 1.** Result of image quality assessment

| Reference image | Distorted image | PSNR (dB) | SSIM |
|---|---|---|---|
| Lena | Watermarked Lena | 46.4731 | 0.9981 |
| Barbara | Watermarked Barbara | 47.1390 | 0.9992 |
| Boat | Watermarked Boat | 45.7325 | 0.9963 |
| Peppers | Watermarked Peppers | 46.7898 | 0.9989 |
| Baboon | Watermarked Baboon | 47.0825 | 0.9996 |
| Couple | Watermarked Couple | 46.8528 | 0.9993 |
| Goldhill | Watermarked Goldhill | 47.5755 | 0.9991 |
| Plane | Watermarked Plane | 44.8134 | 0.9982 |

**Table 2.** BER (%) results of extracted watermark under various attacks

| Image | Method | AWGN (σn = 20) | Median 3×3 | Gauss. 3×3 | JPEG 10% | Scaling 0.9 | Rot. 2° | Rot. 5° |
|---|---|---|---|---|---|---|---|---|
| Barbara | Akhaee [17] | 6.33 | 7.06 | 3.16 | 7.80 | 2.79 | 1.98 | 2.36 |
| Barbara | Kalantari [18] | 5.79 | 6.35 | 3.08 | 6.22 | 3.95 | 22.67 | 32.04 |
| Barbara | Yadav [28] | 0.00 | 0.78 | 0.39 | 11.72 | 0.32 | 3.27 | 7.35 |
| Barbara | Sadreazami [29] | 5.43 | 4.35 | 0.00 | 37.29 | - | 1.38 | 2.13 |
| Barbara | Proposed | 4.58 | 6.81 | 2.14 | 3.32 | 0.00 | 1.43 | 2.26 |
| Boat | Akhaee [17] | 9.54 | 11.11 | 5.46 | 16.39 | 4.46 | 0.00 | 0.78 |
| Boat | Kalantari [18] | 7.25 | 14.38 | 4.79 | 9.64 | 5.77 | 24.69 | 31.90 |
| Boat | Yadav [28] | 2.34 | 1.95 | 1.17 | 9.37 | 0.16 | 3.68 | 6.03 |
| Boat | Sadreazami [29] | 6.16 | 4.30 | 0.00 | 34.71 | - | 1.03 | 1.95 |
| Boat | Proposed | 3.89 | 11.26 | 2.45 | 3.60 | 0.00 | 2.46 | 3.40 |
| Peppers | Akhaee [17] | 11.23 | 2.78 | 3.26 | 17.25 | 1.27 | 3.55 | 4.90 |
| Peppers | Kalantari [18] | 10.46 | 3.44 | 2.92 | 8.07 | 6.80 | 25.48 | 35.46 |
| Peppers | Yadav [28] | 1.56 | 0.78 | 1.95 | 10.93 | 0.24 | 4.29 | 8.85 |
| Peppers | Sadreazami [29] | 7.23 | 4.69 | 0.00 | 35.58 | - | 1.28 | 2.49 |
| Peppers | Proposed | 3.54 | 4.09 | 2.17 | 3.25 | 0.00 | 3.20 | 4.25 |
| Baboon | Akhaee [17] | 6.07 | 3.20 | 0.38 | 7.35 | 1.63 | 4.24 | 5.79 |
| Baboon | Kalantari [18] | 5.58 | 20.47 | 0.16 | 7.03 | 5.90 | 24.39 | 33.86 |
| Baboon | Yadav [28] | 1.39 | 7.81 | 6.22 | 4.30 | 0.78 | 5.64 | 9.13 |
| Baboon | Sadreazami [29] | 5.89 | 3.97 | 0.00 | 36.65 | - | 1.09 | 2.27 |
| Baboon | Proposed | 0.78 | 21.33 | 3.99 | 3.83 | 0.00 | 4.16 | 4.89 |
| Couple | Akhaee [17] | 10.37 | 4.28 | 3.56 | 15.23 | 3.19 | 2.71 | 4.84 |
| Couple | Kalantari [18] | 9.65 | 5.16 | 1.83 | 8.50 | 7.05 | 26.23 | 35.78 |
| Couple | Yadav [28] | 1.63 | 2.00 | 1.59 | 10.52 | 0.00 | 4.67 | 8.29 |
| Couple | Sadreazami [29] | 6.17 | 4.45 | 0.00 | 35.90 | - | 0.97 | 1.95 |
| Couple | Proposed | 1.56 | 4.70 | 1.62 | 4.53 | 0.78 | 3.31 | 3.70 |
| Goldhill | Akhaee [17] | 7.93 | 0.00 | 0.00 | 10.68 | 2.14 | 1.16 | 1.16 |
| Goldhill | Kalantari [18] | 6.46 | 13.22 | 2.04 | 8.63 | 6.13 | 23.90 | 36.59 |
| Goldhill | Yadav [28] | 1.65 | 6.91 | 5.30 | 9.48 | 0.00 | 3.56 | 6.80 |
| Goldhill | Sadreazami [29] | 6.31 | 4.25 | 0.00 | 36.85 | - | 1.13 | 2.29 |
| Goldhill | Proposed | 1.55 | 7.06 | 2.59 | 4.36 | 0.00 | 2.85 | 3.03 |

**Table 3.** BER (%) comparison of recovered watermark for AWGN attack (columns: noise variance)

| Image | Method | 5 | 10 | 15 | 20 | 25 | 30 |
|---|---|---|---|---|---|---|---|
| Lena | Akhaee [17] | 1.07 | 2.10 | 4.78 | 6.43 | 8.42 | 10.03 |
| Lena | Kalantari [18] | 0.63 | 1.59 | 2.64 | 3.77 | 5.64 | 8.36 |
| Lena | Yadav [28] | 1.15 | 2.86 | 3.01 | 3.83 | 5.38 | 7.54 |
| Lena | Sadreazami [29] | 0.79 | 2.00 | 3.64 | 6.25 | 7.72 | 8.10 |
| Lena | Kalantari [30] | 2.27 | 8.50 | 14.27 | 29.20 | 40.56 | 52.08 |
| Lena | Proposed | 0.00 | 0.00 | 0.92 | 1.44 | 4.29 | 7.33 |
| Peppers | Akhaee [17] | 1.53 | 3.39 | 8.60 | 11.23 | 13.84 | 17.27 |
| Peppers | Kalantari [18] | 1.02 | 2.82 | 6.44 | 10.46 | 12.02 | 15.78 |
| Peppers | Yadav [28] | 0.46 | 1.73 | 2.41 | 2.87 | 4.16 | 5.35 |
| Peppers | Sadreazami [29] | 1.90 | 4.36 | 5.52 | 7.23 | 8.83 | 10.19 |
| Peppers | Kalantari [30] | 2.35 | 9.11 | 13.69 | 28.49 | 39.34 | 51.72 |
| Peppers | Proposed | 0.00 | 1.56 | 2.13 | 3.54 | 3.91 | 4.83 |
| Baboon | Akhaee [17] | 1.13 | 1.84 | 3.94 | 6.07 | 7.85 | 10.36 |
| Baboon | Kalantari [18] | 1.04 | 1.79 | 3.17 | 5.58 | 6.93 | 9.10 |
| Baboon | Yadav [28] | 0.33 | 0.59 | 0.96 | 1.39 | 1.30 | 1.16 |
| Baboon | Sadreazami [29] | 1.39 | 2.18 | 3.91 | 5.89 | 8.76 | 11.20 |
| Baboon | Kalantari [30] | 3.46 | 8.08 | 12.35 | 26.57 | 37.89 | 49.64 |
| Baboon | Proposed | 0.00 | 0.00 | 0.00 | 0.78 | 2.34 | 3.15 |
| Couple | Akhaee [17] | 2.10 | 4.64 | 8.79 | 10.37 | 13.68 | 18.97 |
| Couple | Kalantari [18] | 1.24 | 3.16 | 6.43 | 9.65 | 11.07 | 14.45 |
| Couple | Yadav [28] | 0.00 | 0.57 | 0.39 | 1.63 | 2.42 | 3.49 |
| Couple | Sadreazami [29] | 1.58 | 2.94 | 4.20 | 6.17 | 8.33 | 10.67 |
| Couple | Kalantari [30] | 3.06 | 9.65 | 15.23 | 29.39 | 39.24 | 51.66 |
| Couple | Proposed | 0.00 | 0.00 | 0.00 | 1.56 | 3.15 | 6.29 |

**Table 4.** BER (%) comparison of recovered watermark for JPEG compression attack (columns: JPEG compression quality factor, %)

| Image | Method | 10 | 20 | 30 | 40 | 50 |
|---|---|---|---|---|---|---|
| Lena | Akhaee [17] | 13.61 | 3.19 | 0.93 | 0.05 | 0.00 |
| Lena | Kalantari [18] | 9.24 | 1.74 | 0.78 | 0.13 | 0.00 |
| Lena | Yadav [28] | 18.75 | 8.98 | 4.30 | 0.78 | 0.00 |
| Lena | Sadreazami [29] | 36.78 | 30.35 | 24.20 | 22.43 | 9.39 |
| Lena | Kalantari [30] | 7.28 | 0.63 | 0.26 | 0.00 | 0.00 |
| Lena | Proposed | 4.07 | 0.36 | 0.00 | 0.00 | 0.00 |
| Boat | Akhaee [17] | 16.39 | 3.78 | 0.73 | 0.00 | 0.00 |
| Boat | Kalantari [18] | 9.64 | 1.56 | 0.39 | 0.00 | 0.00 |
| Boat | Yadav [28] | 9.37 | 6.25 | 1.72 | 0.39 | 0.00 |
| Boat | Sadreazami [29] | 34.71 | 29.58 | 23.14 | 20.80 | 8.02 |
| Boat | Kalantari [30] | 8.39 | 0.78 | 0.32 | 0.00 | 0.00 |
| Boat | Proposed | 3.60 | 0.45 | 0.28 | 0.00 | 0.00 |
| Barbara | Akhaee [17] | 7.80 | 1.39 | 0.78 | 0.00 | 0.00 |
| Barbara | Kalantari [18] | 8.32 | 1.25 | 0.56 | 0.00 | 0.00 |
| Barbara | Yadav [28] | 11.72 | 2.36 | 0.79 | 0.00 | 0.00 |
| Barbara | Sadreazami [29] | 37.29 | 32.64 | 28.10 | 24.83 | 9.32 |
| Barbara | Kalantari [30] | 6.84 | 1.32 | 0.15 | 0.00 | 0.00 |
| Barbara | Proposed | 3.25 | 0.16 | 0.00 | 0.00 | 0.00 |
| Goldhill | Akhaee [17] | 7.39 | 1.23 | 0.73 | 0.00 | 0.00 |
| Goldhill | Kalantari [18] | 6.43 | 1.17 | 0.39 | 0.00 | 0.00 |
| Goldhill | Yadav [28] | 8.56 | 0.68 | 0.24 | 0.00 | 0.00 |
| Goldhill | Sadreazami [29] | 38.03 | 32.94 | 29.16 | 25.45 | 10.24 |
| Goldhill | Kalantari [30] | 5.95 | 0.86 | 0.13 | 0.00 | 0.00 |
| Goldhill | Proposed | 2.34 | 0.00 | 0.00 | 0.00 | 0.00 |

**Table 5.** BER (%) comparison of recovered watermark for scaling attack (columns: scaling factor)

| Image | Method | 0.50 | 0.70 | 0.90 | 1.10 | 1.30 | 1.50 |
|---|---|---|---|---|---|---|---|
| Lena | Akhaee [17] | 9.30 | 8.40 | 3.20 | 0.00 | 0.00 | 0.00 |
| Lena | Kalantari [18] | 10.45 | 9.13 | 7.24 | 4.83 | 1.06 | 0.32 |
| Lena | Yadav [28] | 1.00 | 20.00 | 10.00 | 6.00 | 0.00 | 0.00 |
| Lena | Proposed | 0.00 | 6.40 | 0.00 | 0.00 | 0.00 | 0.00 |
| Peppers | Akhaee [17] | 0.00 | 12.10 | 1.99 | 0.00 | 0.00 | 0.00 |
| Peppers | Kalantari [18] | 9.53 | 8.36 | 6.96 | 4.98 | 0.94 | 0.26 |
| Peppers | Yadav [28] | 0.00 | 13.00 | 6.00 | 1.00 | 0.00 | 0.00 |
| Peppers | Proposed | 0.00 | 8.02 | 1.36 | 0.00 | 0.00 | 0.00 |
| Baboon | Akhaee [17] | 0.00 | 5.96 | 1.63 | 0.00 | 0.00 | 0.00 |
| Baboon | Kalantari [18] | 9.12 | 7.60 | 5.83 | 4.35 | 0.78 | 0.00 |
| Baboon | Yadav [28] | 0.00 | 6.24 | 0.00 | 0.90 | 0.00 | 0.00 |
| Baboon | Proposed | 0.00 | 1.56 | 0.00 | 0.78 | 0.00 | 0.00 |
| Plane | Akhaee [17] | 0.00 | 2.73 | 3.19 | 0.00 | 0.00 | 0.00 |
| Plane | Kalantari [18] | 10.78 | 8.64 | 6.53 | 4.90 | 1.13 | 0.00 |
| Plane | Yadav [28] | 0.73 | 11.25 | 9.30 | 2.00 | 0.00 | 0.00 |
| Plane | Proposed | 0.00 | 2.34 | 0.00 | 0.78 | 0.00 | 0.00 |
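The figures of merit in these tables are standard: PSNR (with SSIM from [25]) measures imperceptibility of the watermarked image (Table 1), and the bit error rate (BER) of the extracted watermark measures robustness under attack (Tables 2-5). As a reading aid, here is a minimal NumPy sketch of the two simpler metrics; the function names `psnr` and `ber` are our own illustration, not the paper's code, and SSIM is omitted.

```python
import numpy as np

def psnr(reference, distorted, peak=255.0):
    """Peak signal-to-noise ratio in dB between two same-sized images."""
    ref = np.asarray(reference, dtype=np.float64)
    dst = np.asarray(distorted, dtype=np.float64)
    mse = np.mean((ref - dst) ** 2)          # mean squared error
    if mse == 0:
        return float("inf")                  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

def ber(embedded_bits, extracted_bits):
    """Bit error rate (%) between embedded and extracted watermark bits."""
    a = np.asarray(embedded_bits)
    b = np.asarray(extracted_bits)
    return 100.0 * np.mean(a != b)           # fraction of wrong bits, in %

# Toy check: a uniform +10 gray-level shift on an 8-bit image
ref = np.full((64, 64), 100, dtype=np.uint8)
dist = np.full((64, 64), 110, dtype=np.uint8)
print(round(psnr(ref, dist), 2))            # 28.13 (= 10*log10(255^2/100))
print(ber([0, 1, 1, 0], [0, 1, 0, 0]))      # 25.0 (1 of 4 bits wrong)
```

A watermarked image around 46-47 dB PSNR (Table 1) is well above the roughly 40 dB level usually taken as visually transparent, while a BER of 0% means the watermark was recovered perfectly.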



Publication: KSII Transactions on Internet and Information Systems
