# Fast linearized Bregman method for compressed sensing.

1. Introduction

The idea of compressed sensing (CS) [1-5] goes against conventional wisdom in data acquisition. CS asserts that a signal which is sparse or compressible in a certain basis can be recovered from far fewer non-adaptive linear measurements than the traditional Nyquist approach requires. CS therefore reduces computational and transmission costs, and has become an attractive method for signal processing. Since the development of the CS paradigm, the basis pursuit (BP) problem [6] has become a topic of great interest.

In CS, let $x \in \mathbb{R}^N$ be a signal vector. It can be represented in an orthogonal basis $\Psi = \{\phi_i \mid i = 1, 2, \ldots, N\}$; that is,

$$x = \sum_{i=1}^{N} \theta_i \phi_i = \Psi\Theta \qquad (1)$$

where $\theta_i = \langle x, \phi_i \rangle$ denotes the $i$-th sparse coefficient. The signal $x$ has a $K$-sparse representation if $\|\Theta\|_0 = K$ (where $\|\Theta\|_0$ denotes the number of nonzero entries of $\Theta$, and $K \ll N$), and $x$ is a compressible signal if many coefficients in $\Theta$ are small and can be neglected without seriously degrading the signal.

Rather than measuring $x$ itself, which would require knowing the explicit form of $\Psi$, we measure $\Theta$ directly: using a matrix $A \in \mathbb{R}^{M \times N}$ with $M < N$ that is incoherent with the sparse basis, we obtain

$$y = A\Theta \qquad (2)$$

where $y \in \mathbb{R}^M$ is the measurement vector and $A$ is the measurement matrix. In this paper, a random Gaussian matrix is chosen as the measurement matrix $A$.
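As a small numerical illustration of the model (1)-(2), the following Python sketch (the dimensions, the sparsity level, the random seed and the use of NumPy are illustrative assumptions, not from the paper) builds a $K$-sparse coefficient vector and takes $M < N$ random Gaussian measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

N, M, K = 256, 96, 8   # signal length, number of measurements, sparsity

# K-sparse coefficient vector Theta: K nonzero entries at random positions
theta = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
theta[support] = rng.normal(size=K)

# Random Gaussian measurement matrix A with M < N, as used in the paper
A = rng.normal(size=(M, N)) / np.sqrt(M)

# Linear, non-adaptive measurements: y = A * Theta, equation (2)
y = A @ theta

print(y.shape)                   # (96,)
print(np.count_nonzero(theta))   # 8
```

The measurement count $M$ is far below the signal length $N$, which is the point of CS: the $K \ll N$ nonzero coefficients carry all the information.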

The aim of the BP problem is to find a sparse vector $\Theta \in \mathbb{R}^N$ by solving the following constrained minimization problem:

$$\min_{\Theta} \|\Theta\|_1 \quad \text{s.t.} \quad A\Theta = y \qquad (3)$$

The signal $x$ can then be reconstructed through the orthogonal basis $\Psi$ via equation (1).

The Bregman method is a simple but efficient method for solving the BP problem [7]. Moreover, Yin et al. [8] noted that the Bregman method is equivalent to the augmented Lagrangian method for solving the BP problem and has performance identical to the linear programming (LP)-based reconstruction method. The method has therefore received considerable attention from researchers and engineers [7-12]. However, since there is generally no explicit expression for the solution of the Bregman method, the linearized Bregman method (LBM) was proposed by Yin et al. in [8]. It is derived by linearizing the quadratic penalty term at each iteration of the Bregman method, and was further analyzed in [9-10]. To overcome the slow convergence of the LBM, Yin [10] considered several acceleration techniques, such as line search, the Barzilai-Borwein (BB) step and L-BFGS. A fast linearized Bregman iteration (FLBI) algorithm, which adds "kicking" to the LBM, was proposed in [11]: when it detects that $\Theta$ has remained unchanged for a while, it kicks the sparse vector $\Theta$ to the critical point of the stagnation. This kicking procedure is similar to the line search commonly used in optimization; it does not modify the underlying algorithm but merely accelerates its convergence [11]. In addition, an accelerated linearized Bregman method (ALBM), derived from the Nesterov method and the LBM, was proposed in [12] for the same purpose. The main difference between the ALBM and the LBM is that the LBM uses the previous iterate and subgradient to compute the new iterate, whereas the ALBM uses extrapolated iterates and subgradients that are computed as linear combinations of the two previous iterates and subgradients, respectively [12].

Although the FLBI and the ALBM achieve faster convergence, both algorithms have shortcomings. On the one hand, for the FLBI it is very hard to estimate the length of the stagnation and to detect when $\Theta$ has stopped changing. Moreover, since this method does not modify the LBM but merely accelerates its convergence, its reconstruction performance is nearly identical to that of the LBM. On the other hand, since the convergence rate of the ALBM depends on its weighting parameters, appropriate weighting parameters must be chosen, yet no theoretical guide for choosing them exists so far. For these reasons, we propose a fast linearized Bregman method (FLBM), based on the fact that the LBM is equivalent to a gradient descent method applied to a certain formulation, to solve the BP problem with a faster convergence rate. The fast method introduced by Beck et al. [13] has been studied and extended for nonsmooth minimization problems and variational inequalities, and does not require choosing weighting parameters; we therefore use it to accelerate the convergence of the LBM.

The rest of this paper is organized as follows. In Section 2, we propose the FLBM algorithm to reconstruct the sparse coefficients from the measurement. Section 3 presents extensive numerical results to evaluate the performance of the proposed reconstruction algorithm in comparison with some conventional algorithms. Finally, concluding remarks are provided in Section 4.

2. Proposed Reconstruction Algorithm

The BP problem (3) can be transformed into a linear program and then solved by a conventional linear programming method. However, such methods are not tailored for matrices $A$ that are large-scale and completely dense, which is the case for the random Gaussian measurement matrix used in this paper. For this reason, we propose the FLBM to solve the BP problem; it has performance identical to the LP-based reconstruction method but runs dramatically faster.

In this section, we first introduce the Bregman and linearized Bregman methods, and then present the fast linearized Bregman method for solving the BP problem. We also analyze the complexity and the convergence of this algorithm.

2.1 Bregman and Linearized Bregman Methods

The Bregman method was proposed in [7] for solving the following constrained minimization problem.

$$\min_{u} J(u) \quad \text{s.t.} \quad Au = y \qquad (4)$$

where J is a convex function.

The Bregman distance [14] based on the convex function J between points u and v is defined as

$$D_J^p(u, v) = J(u) - J(v) - \langle u - v, p \rangle \qquad (5)$$

where $p \in \partial J(v)$ is a subgradient in the subdifferential of $J$ at the point $v$. The Bregman method is defined in terms of the Bregman distance.
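To make the definition concrete, the following sketch (assumptions: NumPy; $J = \|\cdot\|_1$, for which $p = \mathrm{sgn}(v)$ is a valid subgradient whenever $v$ has no zero entries) evaluates the Bregman distance (5) on a small example:

```python
import numpy as np

def bregman_distance_l1(u, v):
    """D_J^p(u, v) = J(u) - J(v) - <u - v, p> for J = ||.||_1,
    using the subgradient p = sgn(v) of J at v (valid for v without zeros)."""
    p = np.sign(v)
    return np.sum(np.abs(u)) - np.sum(np.abs(v)) - (u - v) @ p

u = np.array([2.0, -2.0, -1.0])
v = np.array([0.5, -1.0, 2.0])

d = bregman_distance_l1(u, v)
print(d)                          # 2.0
print(bregman_distance_l1(v, v))  # 0.0: the distance of a point to itself
```

Convexity of $J$ guarantees $D_J^p(u, v) \ge 0$; note that for the (non-strictly convex) $\ell_1$ norm the distance can also vanish for $u \ne v$, so it is not a metric.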

To solve problem (4), the Bregman method proceeds as follows: given $u_0 = p_0 = 0$, we define

$$u_{k+1} = \arg\min_{u} \; D_J^{p_k}(u, u_k) + \frac{1}{2}\|Au - y\|_2^2, \qquad p_{k+1} = p_k - A^T(Au_{k+1} - y) \qquad (6)$$

This method can be written as

$$u_{k+1} = \arg\min_{u} \; J(u) - J(u_k) - \langle u - u_k, p_k \rangle + \frac{1}{2}\|Au - y\|_2^2 \qquad (7)$$

Since there is generally no explicit expression for the solution of (7), the linearized Bregman method was proposed in [8]. It is generated by

$$u_{k+1} = \arg\min_{u} \; D_J^{p_k}(u, u_k) + \langle A^T(Au_k - y),\, u - u_k \rangle + \frac{1}{2\alpha}\|u - u_k\|_2^2, \qquad p_{k+1} = p_k - \frac{1}{\alpha}(u_{k+1} - u_k) - A^T(Au_k - y) \qquad (8)$$

The linearized Bregman algorithm returns the solution of problem (4) by solving the model

$$\min_{u} \; J(u) + \frac{1}{2\alpha}\|u\|_2^2 \quad \text{s.t.} \quad Au = y \qquad (9)$$

with an appropriate $\alpha$. This method is equivalent to the following formulation [12]:

$$u_{k+1} = \arg\min_{u} \; J(u) + \frac{1}{2\alpha}\|u - \alpha v_k\|_2^2, \qquad v_{k+1} = v_k - A^T(Au_{k+1} - y) \qquad (10)$$

For the BP problem, where $J(\Theta) = \|\Theta\|_1$, we get

$$\Theta_{k+1} = \alpha\,\mathrm{soft}(v_k, 1), \qquad v_{k+1} = v_k - A^T(A\Theta_{k+1} - y) \qquad (11)$$

where $\mathrm{soft}(z, \eta)$ is the soft-thresholding function

$$\mathrm{soft}(z, \eta) = \mathrm{sgn}(z) \circ \max\{|z| - \eta,\, 0\} \qquad (12)$$

where $\eta > 0$ is a known constant, $\mathrm{sgn}(\cdot)$ denotes the sign function, $\circ$ denotes the componentwise product, and the operations are applied componentwise. In addition, $v_k$ is an auxiliary variable:

$$v_k = p_k + \frac{1}{\alpha}\Theta_k \qquad (13)$$
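A direct Python transcription of iteration (11) with the soft-thresholding function (12) might look as follows. This is a sketch, not the authors' code: the problem sizes, the seed, and the choice $\alpha = 0.9/\|A\|_2^2$ (a step-size-safe value assumed here so that the implicit dual gradient step with unit step length remains stable) are illustrative assumptions.

```python
import numpy as np

def soft(z, eta):
    # soft(z, eta) = sgn(z) .* max(|z| - eta, 0), equation (12)
    return np.sign(z) * np.maximum(np.abs(z) - eta, 0.0)

# Small synthetic BP instance (illustrative assumption)
rng = np.random.default_rng(1)
M, N, K = 40, 100, 5
A = rng.normal(size=(M, N))
theta_true = np.zeros(N)
theta_true[rng.choice(N, K, replace=False)] = 3.0 * rng.choice([-1, 1], K)
y = A @ theta_true

# Assumed step-size-safe alpha: alpha * ||A||_2^2 < 1
alpha = 0.9 / np.linalg.norm(A, 2) ** 2

# Linearized Bregman iteration, equation (11)
v = np.zeros(N)
theta = np.zeros(N)
for _ in range(3000):
    theta = alpha * soft(v, 1.0)
    v = v - A.T @ (A @ theta - y)

res = np.linalg.norm(A @ theta - y) / np.linalg.norm(y)
print(f"relative residual after 3000 iterations: {res:.2e}")
```

Starting from $\Theta_0 = 0$ the relative residual is 1; the iteration drives it toward zero while keeping the iterates sparse through the thresholding.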

2.2 Fast Linearized Bregman Method

Since the linearized Bregman method is equivalent to a gradient descent method applied to a certain formulation, we can accelerate the linearized Bregman method by techniques used to accelerate the classical gradient descent method. Here we consider the acceleration technique proposed by Beck et al. in [13]. This technique accelerates the classical gradient descent method in the sense that it reduces the iteration complexity significantly without increasing the per-iteration computational effort.

In order to accelerate the convergence rate of the above iteration, the fast method of Beck et al. [13] is applied to solve this problem. In this method, [THETA] is updated as follows.

$$\Theta_{k+1} = \bar{\Theta}_{k+1} + \frac{t_k - 1}{t_{k+1}}\left(\bar{\Theta}_{k+1} - \bar{\Theta}_k\right) \qquad (14)$$

and

$$\bar{\Theta}_{k+1} = \alpha\,\mathrm{soft}(v_k, 1) \qquad (15)$$

where $t_{k+1} = \frac{1 + \sqrt{1 + 4t_k^2}}{2}$, $t_0 = 1$, and $\frac{t_k - 1}{t_{k+1}} \in [0, 1)$ is the extrapolation step length.
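As a quick check on this step-length sequence (a sketch; the number of terms shown is arbitrary), the following snippet generates $t_k$ and verifies that the extrapolation weights lie in $[0, 1)$:

```python
import math

# Nesterov-type sequence: t_0 = 1, t_{k+1} = (1 + sqrt(1 + 4 t_k^2)) / 2
t = [1.0]
for _ in range(10):
    t.append((1.0 + math.sqrt(1.0 + 4.0 * t[-1] ** 2)) / 2.0)

# Extrapolation weights (t_k - 1) / t_{k+1} start at 0 and stay below 1
w = [(t[k] - 1.0) / t[k + 1] for k in range(10)]
print(round(t[1], 4))                     # 1.618 (the golden ratio)
print(all(0.0 <= wk < 1.0 for wk in w))   # True
```

The first weight is exactly 0 (so the first step is a plain LBM step), and the weights grow toward 1, which is what gives the method its momentum.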

For the constrained minimization problem (3), the FLBM replaces iteration (11) by the following iterative scheme:

$$\begin{aligned} \bar{\Theta}_{k+1} &= \alpha\,\mathrm{soft}(v_k, 1) \\ t_{k+1} &= \frac{1 + \sqrt{1 + 4t_k^2}}{2} \\ \Theta_{k+1} &= \bar{\Theta}_{k+1} + \frac{t_k - 1}{t_{k+1}}\left(\bar{\Theta}_{k+1} - \bar{\Theta}_k\right) \\ v_{k+1} &= v_k - A^T(A\Theta_{k+1} - y) \end{aligned} \qquad (16)$$

In summary, the FLBM algorithm consists of the following major steps:

1) Initialization: given starting points $v_0$ and $\bar{\Theta}_0$, set $t_0 = 1$ and the iteration index $k = 0$;

2) Update $\bar{\Theta}$: $\bar{\Theta}_{k+1} = \alpha\,\mathrm{soft}(v_k, 1)$;

3) Update $t$: $t_{k+1} = \frac{1 + \sqrt{1 + 4t_k^2}}{2}$;

4) Update $\Theta$: $\Theta_{k+1} = \bar{\Theta}_{k+1} + \frac{t_k - 1}{t_{k+1}}\left(\bar{\Theta}_{k+1} - \bar{\Theta}_k\right)$;

5) Update $v$: $v_{k+1} = v_k - A^T(A\Theta_{k+1} - y)$;

6) Terminate if the stopping condition is satisfied; otherwise, set $k = k + 1$ and return to step 2).

The per-iteration computational complexity of the FLBM algorithm is dominated by step 5), whose cost is O(MN) for the matrix-vector products with $A$ and $A^T$; this is the same as for the LBM. The remaining steps are cheaper: at iteration $k$, steps 2) and 4) cost O(N) each, and step 3) costs only O(1).

The FLBM thus reduces the iteration complexity significantly without increasing the per-iteration computational effort. Moreover, the FLBM stays within the classical LBM framework: the fast step does not alter what the method converges to but only accelerates the convergence rate. Hence, the convergence of the FLBM is ensured by the convergence of the LBM, which can be found in [12].

Next we give iteration complexity bounds for both the LBM and the FLBM.

Theorem 1 ([12]). Let the sequence $\{\Theta_k\}$ be generated by the linearized Bregman method, and let $(\Theta^*, v^*)$ be the optimal solution of the BP problem. Then for the function $F(\Theta, v) = J(\Theta) + \frac{1}{2\alpha}\|\Theta - \alpha v\|_2^2$, we have

$$F(\Theta^*, v^*) - F(\Theta_k, v_k) = O(1/k) \qquad (17)$$

Theorem 2. Let the sequence $\{\Theta_k\}$ be generated by the fast linearized Bregman method, and let $(\Theta^*, v^*)$ be the optimal solution of the BP problem. Then for the function $F(\Theta, v) = J(\Theta) + \frac{1}{2\alpha}\|\Theta - \alpha v\|_2^2$, we have

$$F(\Theta^*, v^*) - F(\Theta_k, v_k) = O(1/k^2) \qquad (18)$$

The proof is very similar to that of Theorem 3.4 in [12], so we omit it.

From Theorems 1 and 2, the LBM requires $O(1/\epsilon)$ iterations to obtain an $\epsilon$-optimal solution, while the FLBM reduces this iteration complexity to $O(1/\sqrt{\epsilon})$.
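To make the gap concrete, a back-of-the-envelope sketch (the tolerance values are arbitrary, and the hidden constants in the $O(\cdot)$ bounds are suppressed):

```python
import math

# Iterations to reach an epsilon-optimal solution (constants suppressed):
# LBM needs on the order of 1/epsilon, FLBM on the order of 1/sqrt(epsilon).
for eps in (1e-2, 1e-4, 1e-6):
    lbm_iters = round(1.0 / eps)
    flbm_iters = round(1.0 / math.sqrt(eps))
    print(f"eps={eps:g}: LBM ~ {lbm_iters}, FLBM ~ {flbm_iters}")
# e.g. at epsilon = 1e-4: on the order of 10^4 iterations vs. 10^2
```

The advantage therefore grows as the target accuracy tightens.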

3. Experimental Results and Analysis

In this section, some experimental results are presented to evaluate the performance of the proposed FLBM. First, the test image of size $n \times n$ is arranged into a column vector of length $N = n^2$. This vector is then divided into $n$ frames of size $n$, and we solve one frame at a time. The sampling rate of the test image is defined as $r = m/n$, where $n$ is the frame size and $m$ is the number of measurements per frame.
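The arrangement described above can be sketched as follows (assumptions: NumPy; a tiny random stand-in image, since the actual test images are not included here; column-major stacking of the image; the DWT sparsifying step and the reconstruction itself are omitted):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 8                    # image is n x n (tiny stand-in for 256 x 256)
r = 0.5                  # sampling rate r = m / n
m = int(r * n)

image = rng.random((n, n))

# Arrange the n x n image into a column vector of length N = n^2
# (column-major stacking is an assumption), then split it into
# n frames of size n; each frame is measured separately.
vec = image.reshape(n * n, order="F")
A = rng.normal(size=(m, n))   # one m x n Gaussian matrix per frame
measurements = [A @ vec[i * n:(i + 1) * n] for i in range(n)]

print(len(measurements), measurements[0].shape)   # n frames, each of length m
```

Solving frame by frame keeps each subproblem small (an $m \times n$ system rather than an $rN \times N$ one).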

An image can be represented as a sparse vector in an orthogonal basis. Many studies have shown that real-world images have a sparse representation in the discrete wavelet transform (DWT) domain [15-17], so we choose the DWT basis as the sparse orthogonal basis $\Psi$ in this paper.

The stopping criterion is taken as follows:

$$\frac{\|A\Theta_k - y\|_2}{\|y\|_2} < 10^{-3} \qquad (19)$$

Here we use three test images: Lena (256 x 256), Peppers (256 x 256) and Man (512 x 512), and compare the FLBM reconstruction algorithm with the ALBM, FLBI, LBM, BP [6] and OMP (orthogonal matching pursuit) [18] algorithms. The proposed FLBM reconstruction algorithm is applied to these test images at the sampling rate r = 0.5. Experimental results on the test images are shown in Figs. 1, 2 and 3, respectively.

Fig. 1(a) is the original Lena image; Fig. 1(b), (c), (d), (e), (f) and (g) are the reconstructed Lena images obtained by the FLBM, ALBM, FLBI, LBM, BP and OMP, respectively. The images in Figs. 2 and 3 show similar results. It is clear from Figs. 1-3 that the images reconstructed by our proposed FLBM contain more detail and are much closer to the original image than those of the OMP algorithm, and contain a similar level of detail to those of the ALBM, FLBI, LBM and BP algorithms. In all these figures, image (b) also looks much smoother and clearer than image (g). In short, the proposed reconstruction algorithm performs slightly better in the human perception of global information and greatly enhances the definition of the reconstructed images.

For closer inspection, the face regions of the Lena reconstructions obtained with the different algorithms can be magnified; the magnified face regions are shown in Fig. 4.

It is clear from Fig. 4 that the OMP reconstruction exhibits artifacts and blurring. The reconstruction obtained by the FLBM has a slightly better appearance than those of the ALBM, FLBI, LBM and BP, and a clearly better appearance than that of OMP. The new algorithm maintains the salient features of the face in the original Lena image, with less noise and fewer artifacts.

To quantify the subjective quality of the different reconstruction algorithms on these test images, the mean opinion score (MOS) method is used. The MOS produces accurate results from a small number of scores: it is generated by averaging the results of a set of standard subjective tests and acts as an indicator of perceived image quality. In this experiment we used ten non-expert observers with normal or corrected-to-normal vision. The MOS of the different reconstruction algorithms for the Lena, Peppers and Man images are shown in Table 1.

It is clear from Table 1 that, at the sampling rate r = 0.5, our proposed method performs best on all test images among the compared reconstruction algorithms except the BP algorithm.

For further comparison, two objective criteria, PSNR in dB and the run time in seconds, are used to measure the performance of the proposed algorithm.

The PSNR in dB achieved by these methods in reconstructing the Lena image at different sampling rates is listed in Table 2.

Table 2 gives the quantitative results of the FLBM, ALBM, FLBI, LBM, BP and OMP algorithms. All of these methods provide very good performance. It is also apparent that the PSNR increases with the sampling rate for all methods; that is, a better-quality reconstruction can be obtained by taking more measurements.

The run time in seconds required by these methods to reconstruct Lena image at different sampling rates is listed in Table 3. In general, with increasing sampling rate, the run time increases for all the methods. However, the FLBM is significantly superior to the LBM, BP and OMP algorithm, and is slightly superior to the ALBM and FLBI algorithm, for the same sampling rate. For example, for the Lena image, at the sampling rate r = 0.5, the LBM, BP and OMP require, respectively, 17.385 seconds, 43.452 seconds and 21.986 seconds, the ALBM and FLBI require, respectively, 6.125 seconds, and 6.987 seconds, for the test, while our FLBM takes only about 5.672 seconds. Therefore, the convergence rate of the FLBM algorithm is the fastest among all the methods in comparison.

To confirm the universality of the proposed FLBM algorithm, we apply it now to reconstruct the three different groups of the test images. The PSNR in dB of the reconstructed images and the required run time in seconds at different sampling rates resulting from the FLBM and ALBM algorithms are listed in Table 4.

Clearly, our proposed method has the best performance and the fastest convergence rate on all these test images at the different sampling rates considered.

Since the PSNR performance and the convergence rate depend on the sampling rate, it is necessary to choose an appropriate sampling rate for image reconstruction; in the following tests the sampling rate is set to 0.5. To illustrate the robustness of the FLBM to noise, zero-mean Gaussian noise with variance $\sigma^2 = 0.001$ is added to the three groups of test images, and the proposed FLBM reconstruction algorithm is applied to these noisy images. Experimental results are shown in Figs. 5, 6 and 7, respectively.

Fig. 5(a) is the noisy Lena image with zero-mean Gaussian noise of variance $\sigma^2 = 0.001$; Fig. 5(b), (c), (d), (e), (f) and (g) are the reconstructed Lena images obtained by the FLBM, ALBM, FLBI, LBM, BP and OMP, respectively. The images in Figs. 6 and 7 show similar results. For closer inspection, the face regions of the reconstructions in Fig. 5 can be magnified. The reconstructions obtained by the FLBM and BP have a slightly better appearance than those of the ALBM, FLBI and LBM, and a clearly better appearance than that of OMP. It is clear from Figs. 5-7 that the proposed reconstruction algorithm performs slightly better in the human perception of global information: it not only effectively suppresses stripes and noise, but also greatly enhances the definition of the reconstructed images.

To confirm the robustness of the proposed FLBM algorithm, the PSNR (dB) is used to measure the performance of the proposed algorithm for the noisy Lena image (256 x 256).

Table 5 gives the PSNR of the reconstructed Lena image at different noise levels for the FLBM, ALBM, FLBI, LBM, BP and OMP algorithms. In general, these algorithms provide very good performance. As the noise variance increases, the PSNR decreases for all methods. Moreover, at the same noise level, our proposed FLBM outperforms the BP and OMP algorithms and slightly outperforms the ALBM, FLBI and LBM in terms of PSNR.

For further comparison, the PSNR in dB of the reconstructed different noisy images at different noise levels resulting from the FLBM and ALBM algorithms are listed in Table 6.

Clearly, our proposed method outperforms ALBM for all different noisy images with different noise levels.

4. Conclusion

In this paper, we have studied the CS-based signal reconstruction problem. An effective and fast algorithm, referred to as FLBM, has been proposed to reconstruct the sparse coefficients from the random measurements and thereby reconstruct the signal. The LBM requires $O(1/\epsilon)$ iterations to obtain an $\epsilon$-optimal solution, while the FLBM reduces this iteration complexity to $O(1/\sqrt{\epsilon})$ while requiring almost the same computational effort per iteration. The experimental results have demonstrated that the proposed FLBM achieves a faster convergence rate than the other signal reconstruction methods considered.

http://dx.doi.org/10.3837/tiis.2013.09.012

Received June 24, 2013; revised August 25, 2013; accepted September 21, 2013; published September 30, 2013

References

[1] D. L. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289-1306, 2006.

[2] D. L. Donoho and Y. Tsaig, "Extensions of compressed sensing," Signal Processing, vol. 86, no. 3, pp. 533-548, 2006. http://www.sciencedirect.com/science/article/pii/S0165168405002215

[3] E. J. Candes and M. B. Wakin, "An introduction to compressive sampling," IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 21-30, 2008.

[4] M. A. Davenport, M. F. Duarte, Y. C. Eldar and G. Kutyniok, "Compressed Sensing: Theory and Applications," Cambridge University Press, 2012. http://www.math.tu-berlin.de/fileadmin/i26fg-kutyniok/Kutyniok/Papers/SurveyCS.pdf

[5] J. Romberg, "Imaging via compressive sampling," IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 14-20, 2008.

[6] S. S. Chen, D. L. Donoho and M. A. Saunders, "Atomic decomposition by basis pursuit," SIAM Review, vol. 43, no. 1, pp. 129-159, 2001.

[7] S. Osher, M. Burger, D. Goldfarb, J. Xu and W. Yin, "An iterative regularization method for total variation-based image restoration," SIAM Journal on Multiscale Modeling and Simulation, vol. 4, pp. 460-489, 2005.

[8] W. Yin, M. Burger, D. Goldfarb and J. Darbon, "Bregman iterative algorithms for l1-minimization with applications to compressed sensing," SIAM Journal on Imaging Sciences, vol. 1, no. 1, pp. 143-168, 2008.

[9] J. F. Cai, S. Osher and Z. W. Shen, "Linearized Bregman iterations for compressed sensing," Mathematics of Computation, vol. 78, no. 267, pp. 1515-1536, 2009.

[10] W. Yin, "Analysis and generalizations of the linearized Bregman method," SIAM Journal on Imaging Sciences, vol. 3, no. 1, pp. 856-877, 2010.

[11] S. Osher, Y. Mao, B. Dong and W. Yin, "Fast linearized Bregman iterations for compressed sensing and sparse denoising," Communications in Mathematical Sciences, vol. 8, pp. 93-111, 2010.

[12] B. Huang, S. Q. Ma and D. Goldfarb, "Accelerated linearized Bregman method," Journal of Scientific Computing, vol. 54, no. 2-3, pp. 428-453, 2013.

[13] A. Beck and M. Teboulle, "A fast iterative shrinkage-thresholding algorithm for linear inverse problems," SIAM Journal on Imaging Sciences, vol. 2, no. 1, pp. 183-202, 2009.

[14] L. M. Bregman, "The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming," USSR Computational Mathematics and Mathematical Physics, vol. 7, no. 4, pp. 200-217, 1967.

[15] R. O. Preda and D. N. Vizireanu, "A robust digital watermarking scheme for video copyright protection in the wavelet domain," Measurement, vol. 43, no. 10, pp. 1720-1726, 2010.

[16] R. O. Preda and D. N. Vizireanu, "Quantisation-based video watermarking in the wavelet domain with spatial and temporal redundancy," International Journal of Electronics, vol. 98, no. 3, pp. 393-405, 2011.

[17] Z. Z. Yang and Z. Yang, "l0-regularisation signal reconstruction based on fast alternating direction method of multipliers for compressed sensing," Journal of Electronics & Information Technology, vol. 35, no. 4, pp. 826-831, 2013. http://jeit.ie.ac.cn/CN/abstract/abstract16337.shtml

[18] J. Tropp and A. Gilbert, "Signal recovery from random measurements via orthogonal matching pursuit," IEEE Transactions on Information Theory, vol. 53, no. 12, pp. 4655-4666, 2007.

Zhenzhen YANG received her B.Sc. from Linyi University in 2008 and her M.Sc. from Nanjing University of Posts and Telecommunications in 2011. She is now a Ph.D. student at Nanjing University of Posts and Telecommunications. Her research interests include signal processing and compressed sensing. (Email: 2011010101@njupt.edu.cn)

Zhen YANG received his B.Sc. and M.Sc. from Nanjing University of Posts and Telecommunications in 1983 and 1988, respectively, and his Ph.D. from Shanghai Jiaotong University in 1999. He is now a professor at Nanjing University of Posts and Telecommunications. His research interests include speech processing, modern speech communication and network communication. (Email: yangz@njupt.edu.cn)

Zhenzhen Yang (1,2) and Zhen Yang (2)

(1) College of Communication & Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing (210003), China

[e-mail: 2011010101@njupt.edu.cn]

(2) Key Lab of "Broadband Wireless Communication and Sensor Network Technology" (Nanjing University of Posts and Telecommunications), Ministry of Education

[e-mail: yangz@njupt.edu.cn]

Corresponding author: Zhenzhen Yang

Table 1. The MOS of different reconstruction algorithms for different test images

| Algorithm | Lena | Peppers | Man |
|-----------|------|---------|-----|
| FLBM | 4.11 | 4.05 | 4.17 |
| ALBM | 3.99 | 3.91 | 4.05 |
| FLBI | 3.82 | 3.83 | 3.98 |
| LBM  | 3.78 | 3.82 | 3.92 |
| BP   | 4.16 | 4.09 | 4.21 |
| OMP  | 3.45 | 3.58 | 3.56 |

Table 2. The PSNR (dB) for the Lena image at different sampling rates r

| Algorithm | r = 0.1 | r = 0.2 | r = 0.3 | r = 0.4 | r = 0.5 |
|-----------|---------|---------|---------|---------|---------|
| FLBM | 31.606 | 32.183 | 32.831 | 33.542 | 34.453 |
| ALBM | 31.584 | 32.175 | 32.746 | 33.489 | 34.391 |
| FLBI | 31.565 | 32.164 | 32.791 | 33.456 | 34.395 |
| LBM  | 31.563 | 32.168 | 32.789 | 33.452 | 34.397 |
| BP   | 31.174 | 32.156 | 32.896 | 33.610 | 34.839 |
| OMP  | 28.939 | 29.085 | 30.143 | 31.108 | 32.357 |

Table 3. The run time (sec) for the Lena image at different sampling rates r

| Algorithm | r = 0.1 | r = 0.2 | r = 0.3 | r = 0.4 | r = 0.5 |
|-----------|---------|---------|---------|---------|---------|
| FLBM | 0.514 | 1.247 | 2.405 | 3.547 | 5.672 |
| ALBM | 0.723 | 1.415 | 2.736 | 4.059 | 6.125 |
| FLBI | 0.815 | 1.574 | 2.945 | 4.872 | 6.987 |
| LBM  | 1.376 | 2.564 | 4.963 | 8.876 | 17.385 |
| BP   | 32.031 | 37.640 | 35.078 | 39.217 | 43.452 |
| OMP  | 4.187 | 4.991 | 7.954 | 13.502 | 21.986 |

Table 4. The PSNR (dB) and run time (sec) of FLBM and ALBM for different test images

| Test Image | Measure | Algorithm | r = 0.1 | r = 0.2 | r = 0.3 | r = 0.4 | r = 0.5 |
|------------|---------|-----------|---------|---------|---------|---------|---------|
| Lena | PSNR | FLBM | 31.606 | 32.183 | 32.831 | 33.542 | 34.453 |
| Lena | PSNR | ALBM | 31.584 | 32.175 | 32.746 | 33.489 | 34.391 |
| Lena | Run time | FLBM | 0.514 | 1.247 | 2.405 | 3.547 | 5.672 |
| Lena | Run time | ALBM | 0.723 | 1.415 | 2.736 | 4.059 | 6.125 |
| Peppers | PSNR | FLBM | 32.801 | 33.432 | 33.959 | 34.676 | 35.593 |
| Peppers | PSNR | ALBM | 32.752 | 33.336 | 33.878 | 34.625 | 35.511 |
| Peppers | Run time | FLBM | 0.578 | 1.456 | 2.483 | 3.602 | 5.821 |
| Peppers | Run time | ALBM | 0.842 | 1.923 | 2.952 | 4.293 | 7.844 |
| Man | PSNR | FLBM | 34.072 | 34.629 | 35.256 | 35.998 | 36.942 |
| Man | PSNR | ALBM | 34.055 | 34.596 | 35.224 | 35.975 | 36.849 |
| Man | Run time | FLBM | 1.637 | 4.157 | 8.603 | 18.351 | 32.165 |
| Man | Run time | ALBM | 2.571 | 5.236 | 9.975 | 20.693 | 38.645 |

Table 5. The PSNR (dB) for the noisy Lena image at different noise variances

| Algorithm | sigma^2 = 0.001 | 0.005 | 0.01 | 0.05 | 0.1 |
|-----------|-----------------|-------|------|------|-----|
| FLBM | 32.043 | 28.037 | 25.962 | 19.837 | 17.706 |
| ALBM | 32.032 | 28.016 | 25.693 | 19.715 | 17.655 |
| FLBI | 32.028 | 27.914 | 25.537 | 19.096 | 17.169 |
| LBM  | 32.025 | 27.912 | 25.536 | 19.098 | 17.162 |
| BP   | 30.886 | 26.153 | 23.561 | 17.672 | 15.376 |
| OMP  | 28.025 | 23.692 | 20.642 | 14.876 | 12.489 |

Table 6. The PSNR (dB) of FLBM and ALBM for different noisy test images

| Test Image | Algorithm | sigma^2 = 0.001 | 0.005 | 0.01 | 0.05 | 0.1 |
|------------|-----------|-----------------|-------|------|------|-----|
| Lena | FLBM | 32.043 | 28.037 | 25.962 | 19.837 | 17.706 |
| Lena | ALBM | 32.032 | 28.016 | 25.693 | 19.715 | 17.655 |
| Peppers | FLBM | 32.834 | 28.523 | 26.022 | 20.089 | 17.964 |
| Peppers | ALBM | 32.758 | 28.316 | 25.745 | 19.821 | 17.680 |
| Man | FLBM | 33.823 | 29.764 | 27.025 | 20.137 | 18.652 |
| Man | ALBM | 33.326 | 29.325 | 26.736 | 19.956 | 18.461 |

Publication: KSII Transactions on Internet and Information Systems, Sep 1, 2013.