
Enhancing Matrix Completion Using a Modified Second-Order Total Variation.

1. Introduction

Matrix Completion (MC) refers to the problem of filling in the missing entries of an unknown matrix from a limited sampling of its entries. This problem has received widespread attention from researchers in many real applications; see, e.g., [1-3]. Generally, it is impossible to exactly recover an arbitrary matrix in this setting without additional information, since the problem is extraordinarily ill-posed [4] and hence has infinitely many solutions. However, in many realistic scenarios, the matrix we wish to recover is low-rank or can be well approximated by a low-rank matrix, as illustrated in Figure 1. Therefore, an immediate idea for modeling MC is to pursue the lowest-rank matrix by solving the following optimization problem:

$$\min_{X \in \mathbb{R}^{m \times n}} \ \operatorname{rank}(X) \quad \text{s.t.} \quad X_{\Omega} = M_{\Omega}, \qquad (1)$$

where $X, M \in \mathbb{R}^{m \times n}$, and the entries of $M$ on the location set $\Omega \subset [m] \times [n]$ (with $[k] \triangleq \{1, 2, \ldots, k\}$) are given while the remaining entries are missing.

Unfortunately, problem (1) is of little practical use since it is NP-hard in general. To circumvent this issue, a nuclear norm minimization method was suggested in [4, 5], which solves

$$\min_{X \in \mathbb{R}^{m \times n}} \ \|X\|_* \quad \text{s.t.} \quad X_{\Omega} = M_{\Omega}, \qquad (2)$$

where $\|X\|_* \triangleq \sum_{i=1}^{\min\{m,n\}} \sigma_i(X)$ and $\sigma_i(X)$ denotes the $i$th largest singular value of $X$. In particular, Candes and Recht [4] proved that low-rank matrices can be exactly recovered by solving problem (2) under some necessary assumptions. Recently, some new and deeper results on problem (2) have been obtained that address the theoretical shortcomings of [4]; see, e.g., [6, 7]. Many algorithms have also been proposed to solve problem (2) and its variants, including the Accelerated Proximal Gradient (APG) algorithms [8, 9], the Iterative Reweighted Least Squares (IRLS) algorithm [10], and the Singular Value Thresholding (SVT) algorithms [11, 12].
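To make the rank/nuclear-norm relationship in (1) and (2) concrete, the following Matlab snippet (ours, purely illustrative; no names here come from the paper) builds an exactly rank-$r$ matrix and evaluates both quantities from its singular values.

    % Illustrative sketch (ours): rank vs. nuclear norm of a low-rank matrix
    m = 256; n = 256; r = 10;
    X = randn(m, r) * randn(r, n);   % exactly rank-r by construction
    s = svd(X);                      % singular values in decreasing order
    fprintf('rank = %d, nuclear norm = %.2f\n', rank(X), sum(s));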

The nuclear norm is the tightest convex approximation of the rank function, but it is still far from the closest one [13]. The relationship between the nuclear norm and the rank function of matrices parallels that between the $\ell_1$ norm and the $\ell_0$ norm of vectors [4, 14]. To obtain a much closer approximation to the rank function, many nonconvex functions originally used to approximate the $\ell_0$ norm (see, e.g., [15-19]) have been successfully extended to replace the nuclear norm [20-22]. Two representative ones are the Schatten-$q$ quasi-norm [20, 23, 24] and the truncated nuclear norm [21, 25], the latter also called the partial sum of singular values [26]. Methods based on these two surrogate functions have been well investigated for the MC problem [20, 21] and achieve much better performance than the earlier nuclear norm minimization method; a small sketch of both surrogates follows. Beyond these, many other methods exist for the MC problem; see, e.g., [27, 28] for matrix factorization based methods and [29, 30] for greedy methods.
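Both surrogates reduce to simple functions of the singular values; a minimal Matlab sketch (our naming, with user-chosen exponent q and truncation level r, not taken from the cited implementations) is:

    % Minimal sketch (ours) of the two rank surrogates on a test matrix X
    X = randn(60, 50);
    s = svd(X);                        % singular values of X
    q = 0.5;                           % Schatten exponent, 0 < q < 1
    schatten_q = sum(s.^q)^(1/q);      % Schatten-q quasi-norm
    r = 10;                            % number of leading singular values ignored
    trunc_nn = sum(s(r+1:end));        % truncated nuclear norm: sum of the tail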

Overall, almost all existing MC methods are designed to approximate the rank function and thus to induce low rank. To some degree, low rank characterizes only a global prior of a matrix. In many instances, however, matrices also carry additional structural priors besides low rank. Take again the matrix (greyscale image) shown in Figure 1(a): it is rich in smoothness features (priors); in other words, an entry and its neighboring entries in such a matrix often differ little in value. When dealing with such matrices, most existing MC methods cannot capture these smoothness priors well and hence may perform poorly. Moreover, when the missing ratio becomes extremely high, or when high-quality recovery is urgently needed, one has no choice but to take additional priors into consideration, since it becomes very difficult to recover a matrix exactly or robustly from its low-rank prior alone. Therefore, how to mine more of the available priors of the underlying matrices and integrate them into MC becomes a crucial problem.

In this paper, we propose a new method that combines the low-rank and smoothness priors of the underlying matrices to deal with the MC problem. Our method is not the first work on this topic, but by using a modified second-order total variation of matrices it becomes very competitive. In summary, the contributions of this paper are as follows:

(i) A modified second-order total variation and the nuclear norm are combined to characterize the smoothness and low-rank priors of the underlying matrices, respectively, which makes our method much more competitive for the MC problem.

(ii) An efficient algorithm is developed to solve the induced optimization problem, and extensive experiments testify to the effectiveness of the proposed method over many state-of-the-art MC methods.

The remainder of this paper is organized as follows. In Section 2, we review some MC methods that simultaneously optimize both the low-rank and smoothness priors of the underlying matrices. Since matrices can be viewed as second-order tensors, most tensor-based completion methods can also be applied to the MC problem; two related tensor completion methods are therefore also introduced in this section. In Section 3, we present the proposed method and design an efficient algorithm to solve the induced optimization problem. Experimental results are presented in Section 4. Finally, the conclusion and future work are given in Section 5.

2. A Review on MC with Smoothness Priors

The low-rank and smoothness priors of underlying matrices have been well studied in the MC and visual data processing communities [31, 32], respectively. However, work combining them for MC is rarely reported. To the best of our knowledge, Han et al. [33] gave the first such work for MC and proposed a Linear Total Variation approximation regularized Nuclear Norm (LTVNN) minimization method, which takes the form

$$\min_{X \in \mathbb{R}^{m \times n}} \ \gamma \|X\|_* + (1 - \gamma) \|X\|_{\mathrm{LTVA}} \quad \text{s.t.} \quad X_{\Omega} = M_{\Omega}, \qquad (3)$$

where $0 \le \gamma \le 1$ is a penalty parameter. In the LTVNN method, the smoothness priors of the underlying matrices are constrained by a Linear Total Variation Approximation (LTVA), which is defined as

$$\|X\|_{\mathrm{LTVA}} \triangleq \sum_{i=1}^{m} \sum_{j=1}^{n} \left( \left(D^{v}_{i,j}(X)\right)^2 + \left(D^{h}_{i,j}(X)\right)^2 \right), \qquad (4)$$

where

$$D^{v}_{i,j}(X) = \begin{cases} x_{i+1,j} - x_{i,j}, & i < m, \\ 0, & i = m, \end{cases} \qquad D^{h}_{i,j}(X) = \begin{cases} x_{i,j+1} - x_{i,j}, & j < n, \\ 0, & j = n. \end{cases} \qquad (5)$$
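In Matlab, (4) and (5) amount to summing squared first-order differences along both directions; a minimal sketch (ours, assuming the zero boundary convention in (5)) is:

    % Sketch (ours) of computing ||X||_LTVA via first-order differences
    X = randn(8);                           % any test matrix
    Dv = diff(X, 1, 1);                     % vertical differences x_{i+1,j} - x_{i,j}
    Dh = diff(X, 1, 2);                     % horizontal differences x_{i,j+1} - x_{i,j}
    ltva = sum(Dv(:).^2) + sum(Dh(:).^2);   % boundary terms are zero and drop out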

It has been shown in [33] that the LTVNN method is superior to many state-of-the-art MC methods that pursue only the low rank, such as the earlier SVT method [11] and the TNNR method [21]. However, the LTVNN method may suffer from the staircase effect due to its use of linear total variation. Moreover, the induced LTVNN algorithm does not solve problem (3) directly: it first splits problem (3) into two subproblems, then solves these two subproblems independently, and finally outputs the result by combining their solutions. Obviously, such an algorithmic strategy is not consistent with the proposed optimization problem (3).

Recently, this idea has been extended to the tensor completion (TC) problem. From the tensor perspective, vectors and matrices can be viewed as first- and second-order tensors, respectively. Therefore, most existing TC methods can also be used to deal with the MC problem. In 2014, Chen et al. [34] proposed a Tucker decomposition based Simultaneous Tensor Decomposition and Completion (STDC) method for TC. The STDC method solves the following optimization problem:

$$\min_{\mathcal{Z},\, U^{(1)}, \ldots, U^{(N)}} \ \sum_{i=1}^{N} \alpha_i \left\|U^{(i)}\right\|_* + \lambda\, \operatorname{tr}\!\left(\Phi^{T} L \Phi\right) \quad \text{s.t.} \quad \mathcal{X} = \mathcal{Z} \times_1 U^{(1)} \times_2 \cdots \times_N U^{(N)}, \ \ \mathcal{X}_{\Omega} = \mathcal{M}_{\Omega}, \qquad (6)$$

where $\mathcal{X} = \mathcal{Z} \times_1 U^{(1)} \times_2 \cdots \times_N U^{(N)}$ is the Tucker decomposition of the tensor $\mathcal{X}$, $\Phi \triangleq U^{(1)} \otimes U^{(2)} \otimes \cdots \otimes U^{(N)}$ is a unified factor matrix that collects all factor matrices, $\otimes$ is the Kronecker product, and $L$ is a preset matrix designed from some priors. Similar to the matrix case, the elements of the tensor $\mathcal{M}$ on the location set $\Omega \subset [I_1] \times [I_2] \times \cdots \times [I_N]$ are given while the rest are missing. In problem (6), the first term of the objective function, $\sum_{i=1}^{N} \alpha_i \|U^{(i)}\|_*$, characterizes the low rank of the tensor $\mathcal{X}$, and the second term enforces similarity between individual components, which yields smoothed results for $\mathcal{X}$. Thus, STDC can be considered a tensor extension of low-rank and smooth matrix completion. Later, based on the PARAFAC decomposition of tensors, Yokota et al. [35] proposed a Smooth PARAFAC tensor Completion (SPC) method that integrates the low-rank and smoothness priors of a tensor by solving

$$\min_{\mathcal{X},\, \{g_r\},\, \{u^{(n)}_r\}} \ \frac{1}{2} \left\|\mathcal{X} - \mathcal{Z}\right\|_F^2 + \sum_{r=1}^{R} |g_r| \sum_{n=1}^{N} \rho^{(n)} \left\|L^{(n)} u^{(n)}_r\right\|_p^p \quad \text{s.t.} \quad \mathcal{X}_{\Omega} = \mathcal{M}_{\Omega}, \qquad (7)$$

where $\mathcal{Z} \triangleq \sum_{r=1}^{R} g_r \, u^{(1)}_r \circ u^{(2)}_r \circ \cdots \circ u^{(N)}_r$ is the PARAFAC decomposition of the tensor $\mathcal{Z}$, $\circ$ is the outer product, $\Omega^c$ stands for the complement of $\Omega$, $\|\cdot\|_p$ is the classical $\ell_p$ norm on vectors, and the matrix $L^{(n)} \in \mathbb{R}^{(I_n - 1) \times I_n}$ is a smoothness constraint matrix, defined as

$$L^{(n)} = \begin{bmatrix} 1 & -1 & & & \\ & 1 & -1 & & \\ & & \ddots & \ddots & \\ & & & 1 & -1 \end{bmatrix}. \qquad (8)$$

The SPC method integrates the low-rank and smoothness priors into a single term (i.e., the second term of the objective function in problem (7)), and it is thus quite different from the STDC method. Besides, it has been shown numerically in [35] that the SPC method performs much better than the STDC method. Further discussion of tensors and TC problems is beyond the scope of this paper; we refer interested readers to [35, 36] and the references therein.

3. Proposed New Method

In this section, we propose a new method that combines the low rank and smoothness priors of underlying matrices to deal with the MC problem. In our method, the nuclear norm is used to characterize the low rank of underlying matrices while their smoothness priors are characterized by a modified second-order total variation (MSTV), which is defined as

$$\|X\|_{\mathrm{MSTV}} \triangleq \sum_{i=1}^{m} \sum_{j=1}^{n} \left( \left(R^{v}_{i,j}(X)\right)^2 + \left(R^{h}_{i,j}(X)\right)^2 \right), \qquad (9)$$

where

$$R^{v}_{i,j}(X) = \begin{cases} x_{2,j} - x_{1,j}, & i = 1, \\ x_{i+1,j} - 2x_{i,j} + x_{i-1,j}, & 1 < i < m, \\ x_{m-1,j} - x_{m,j}, & i = m, \end{cases} \qquad (10)$$

and

$$R^{h}_{i,j}(X) = \begin{cases} x_{i,2} - x_{i,1}, & j = 1, \\ x_{i,j+1} - 2x_{i,j} + x_{i,j-1}, & 1 < j < n, \\ x_{i,n-1} - x_{i,n}, & j = n. \end{cases} \qquad (11)$$

Such a modified second-order total variation is a new metric for the smoothness priors of matrices, different from both the classical second-order total variation [37] and the earlier linear total variation approximation. From a functional perspective, it not only inherits the geometrical structure of the second-order total variation, which avoids the staircase effect caused by the linear total variation approximation, but also yields a smooth function that admits more efficient numerical algorithms. Figure 2 shows the geometrical differences between the linear total variation approximation and our modified second-order total variation. Besides, equation (9) can be further written in matrix form as

[[parallel]X[parallel].sub.MSTV] = [[parallel][X.sup.T][R.sub.m][parallel].sup.2.sub.F] + [[parallel]X[R.sub.n][parallel].sup.2.sub.F], (12)

where, for $v = m, n$,

$$R_v = \begin{bmatrix} -1 & 1 & & & \\ 1 & -2 & 1 & & \\ & \ddots & \ddots & \ddots & \\ & & 1 & -2 & 1 \\ & & & 1 & -1 \end{bmatrix} \in \mathbb{R}^{v \times v}. \qquad (13)$$
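Under our reading of (13) as a second-order difference matrix whose boundary rows degrade to first-order differences, $R_v$ can be built and the matrix form (12) evaluated as follows (a sketch, ours, not the authors' released code):

    % Sketch (ours): build R_v and evaluate the matrix form (12)
    v = 6;
    e = ones(v, 1);
    Rv = full(spdiags([e, -2*e, e], -1:1, v, v));  % interior rows [1 -2 1]
    Rv(1, 1) = -1; Rv(v, v) = -1;                  % boundary rows reduce to first order
    X = randn(v);                                  % square test matrix, so R_m = R_n = Rv
    mstv = norm(X' * Rv, 'fro')^2 + norm(X * Rv, 'fro')^2;   % right-hand side of (12)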

Now, the proposed method, based on the nuclear norm and the modified second-order total variation, can be modeled as the following optimization problem:

$$\min_{X \in \mathbb{R}^{m \times n}} \ \|X\|_* + \lambda \|X\|_{\mathrm{MSTV}} \quad \text{s.t.} \quad X_{\Omega} = M_{\Omega}, \qquad (14)$$

where $\lambda > 0$ is a penalty parameter. Although the nuclear norm is used in problem (14) to characterize the low rank of matrices, it can be replaced by other surrogate functions of the rank function, such as the Schatten-$q$ quasi-norm and the truncated nuclear norm. Problem (14) and the aforementioned problem (3) share some similarities, but one will see in the experimental part that the algorithm induced by the proposed problem (14) performs much better than that of problem (3).

To solve problem (14), we adopt the Alternating Direction Method of Multipliers (ADMM) [38], which has been widely used to solve many application problems. To use ADMM, we first rewrite problem (14) as

$$\min_{X, Y \in \mathbb{R}^{m \times n}} \ \|X\|_* + \lambda \|Y\|_{\mathrm{MSTV}} \quad \text{s.t.} \quad X = Y, \ \ Y_{\Omega} = M_{\Omega}. \qquad (15)$$

Then the corresponding augmented Lagrangian function of (15) is presented as

$$\mathcal{L}_{\rho}(X, Y, Z) = \|X\|_* + \lambda \|Y\|_{\mathrm{MSTV}} + \langle Z, X - Y \rangle + \frac{\rho}{2} \|X - Y\|_F^2, \qquad (16)$$

where $Z \in \mathbb{R}^{m \times n}$ is the dual variable and $\rho > 0$ is a penalty parameter associated with the augmentation. According to ADMM, the iterative procedure can be described as follows:

(1) Fixing $Y^{(k)}$ and $Z^{(k)}$, solve for $X^{(k+1)}$ by

$$X^{(k+1)} = \arg\min_{X} \ \|X\|_* + \left\langle Z^{(k)}, X - Y^{(k)} \right\rangle + \frac{\rho}{2} \left\|X - Y^{(k)}\right\|_F^2, \qquad (17)$$

which is equivalent to

$$X^{(k+1)} = \arg\min_{X} \ \|X\|_* + \frac{\rho}{2} \|X - G\|_F^2, \qquad (18)$$

where $G = Y^{(k)} - Z^{(k)}/\rho$. According to [11], problem (18) has a closed-form solution given by the singular value thresholding operator $\mathcal{D}_{1/\rho}(\cdot)$; i.e.,

$$X^{(k+1)} = \mathcal{D}_{1/\rho}(G) \triangleq S_G \, \mathrm{Diag}\!\left( \left\{ \max\left(\sigma_i(G) - \tfrac{1}{\rho},\, 0\right) \right\}_{1 \le i \le \min\{m,n\}} \right) D_G^{T}, \qquad (19)$$

where $G = S_G \, \mathrm{Diag}\!\left( \{\sigma_i(G)\}_{1 \le i \le \min\{m,n\}} \right) D_G^{T}$ is the singular value decomposition of $G$.
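The X-update thus reduces to one SVD plus a soft-thresholding of the singular values; a minimal Matlab sketch (function name ours, not from the released code) is:

    % Sketch (ours) of the singular value thresholding step (19)
    function Xnew = svt(G, tau)
        [S, Sig, D] = svd(G, 'econ');        % G = S * Sig * D'
        s = max(diag(Sig) - tau, 0);         % soft-threshold the singular values
        Xnew = S * diag(s) * D';
    end
    % X-update usage: Xk1 = svt(Yk - Zk / rho, 1 / rho);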

(2) Fixing $X^{(k+1)}$ and $Z^{(k)}$, solve for $Y^{(k+1)}$ by

$$Y^{(k+1)} = \arg\min_{Y} \ \lambda \|Y\|_{\mathrm{MSTV}} + \left\langle Z^{(k)}, X^{(k+1)} - Y \right\rangle + \frac{\rho}{2} \left\|X^{(k+1)} - Y\right\|_F^2, \qquad (20)$$

which is equivalent to

$$Y^{(k+1)} = \arg\min_{Y} \ J(Y) \triangleq \lambda \left( \left\|Y^{T} R_m\right\|_F^2 + \left\|Y R_n\right\|_F^2 \right) + \frac{\rho}{2} \|Y - H\|_F^2, \qquad (21)$$

where $H \triangleq X^{(k+1)} + Z^{(k)}/\rho$. By setting the derivative of $J(Y)$ with respect to $Y$ to zero, we have

$$2\lambda R_m R_m^{T} Y + Y \left( 2\lambda R_n R_n^{T} + \rho I_n \right) = \rho H, \qquad (22)$$

where $I_n$ is the $n \times n$ identity matrix. Equation (22) is the well-known Sylvester equation [39]. In this paper, we resort to the Matlab command lyap to solve it; i.e.,

$$\hat{Y} = \mathrm{lyap}\!\left( 2\lambda R_m R_m^{T},\ 2\lambda R_n R_n^{T} + \rho I_n,\ -\rho H \right). \qquad (23)$$
Algorithm 1: Solving problem (14) via ADMM.

Input: $\lambda$, $\Omega$, $M_{\Omega}$, $X^{(0)} = Y^{(0)} = M_{\Omega}$, $Z^{(0)} = 0$, $\rho$, $R_m$, and $R_n$
Output: $X^{\#}$
 1: Initialize $k \leftarrow 1$
 2: repeat
 3:   Update $X^{(k+1)}$ by (19);
 4:   Update $\hat{Y}$ by (23);
 5:   Update $Y^{(k+1)}$ by (24);
 6:   Update $Z^{(k+1)}$ by (25);
 7:   Update $\rho \leftarrow \min\{t\rho, \rho_{\max}\}$;
 8:   Update $k \leftarrow k + 1$.
 9: until a certain stopping criterion is satisfied.
 10: return $X^{\#} = X^{(k)}$.


After we obtain $\hat{Y}$, i.e., the solution to (22), we can then approximately obtain $Y^{(k+1)}$ by

$$Y^{(k+1)}_{i,j} = \begin{cases} M_{i,j}, & (i,j) \in \Omega, \\ \hat{Y}_{i,j}, & (i,j) \in \Omega^{c}. \end{cases} \qquad (24)$$

(3) Update $Z^{(k+1)}$ by

$$Z^{(k+1)} = Z^{(k)} + \rho \left( X^{(k+1)} - Y^{(k+1)} \right). \qquad (25)$$

We summarize the iterative procedure in Algorithm 1.
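A compact Matlab sketch of Algorithm 1, under the assumptions spelled out in the comments (variable names are ours; the released code at the GitHub link below is the reference implementation), reads:

    % Sketch (ours) of Algorithm 1; assumes svt as sketched above and the
    % Control System Toolbox's lyap, where lyap(A, B, C) returns Y solving
    % A*Y + Y*B + C = 0.
    function X = mstv_mc(M, Omega, lambda, rho, Rm, Rn)
        % M: observed entries on the logical mask Omega, zeros elsewhere
        n = size(M, 2);
        X = M; Y = M; Z = zeros(size(M));
        t = 1.05; rho_max = 1e3;
        A = Rm * Rm'; B = Rn * Rn';
        for k = 1:500
            Xold = X;
            X = svt(Y - Z / rho, 1 / rho);                      % step (19)
            Yhat = lyap(2*lambda*A, 2*lambda*B + rho*eye(n), -rho*(X + Z/rho)); % (23)
            Y = Yhat; Y(Omega) = M(Omega);                      % projection (24)
            Z = Z + rho * (X - Y);                              % dual update (25)
            rho = min(t * rho, rho_max);                        % adaptive penalty
            if norm(X - Xold, 'fro') / max(norm(Xold, 'fro'), 1) < 1e-4
                break;                                          % criterion (26)
            end
        end
    end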

Note that we do not use a fixed $\rho$ but instead adopt an adaptive updating strategy for $\rho$ (see step 7 in Algorithm 1). This setting is mainly inspired by recent studies on ADMM [21, 26, 40]. In fact, both theoretical and applied results have shown that such a setting helps ADMM-based algorithms converge quickly and hence avoids high computational cost. Without loss of generality, we simply set $t = 1.05$ and $\rho_{\max} = 10^{3}$ in Algorithm 1. It should also be noted that the performance of the proposed algorithm can be further improved if these two parameters are optimized.

4. Numerical Experiments

In this section, some experiments are conducted to evaluate the performance of the proposed method. They mainly involve the randomly masked image completion problem and the text masked image reconstruction problem. The former aims to complete incomplete images generated by masking their entries randomly, and the latter aims to reconstruct images overlaid with text. In our experiments, the Lenna image shown in Figure 1(a) and the 20 widely used test images shown in Figure 3 are used to generate the desired test matrices. To evaluate the quality of the reconstructed images, the Structural SIMilarity (SSIM) and Peak Signal-to-Noise Ratio (PSNR) indices are adopted. We refer readers to [41] for the SSIM index. As to the PSNR index, suppose the Mean Squared Error (MSE) is defined as $\mathrm{MSE} = (1/|\Omega^c|) \sum_{(i,j) \in \Omega^c} (M_{i,j} - X^{\#}_{i,j})^2$, where $|\Omega^c|$ counts the number of elements in $\Omega^c$, and $M$ and $X^{\#}$ are the original image and the reconstructed image, respectively; then $\mathrm{PSNR} = 10 \log_{10}(255^2/\mathrm{MSE})$. The missing ratio of an incomplete image is defined as $|\Omega^c|/(m \times n)$. All the experiments are implemented in Matlab and the codes can be downloaded from https://github.com/DongSylan/LIMC.
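As a concrete instance of the PSNR formula above, the following sketch (ours) evaluates the error only on the missing set; Omegac is assumed to be a logical mask of the missing entries.

    % Sketch (ours): PSNR restricted to the missing entries Omega^c
    err = M(Omegac) - Xhat(Omegac);      % M: original image, Xhat: reconstruction
    mse = mean(err(:).^2);
    psnr_val = 10 * log10(255^2 / mse);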

4.1. Convergence Behavior. In this part, we conduct some experiments to examine the convergence behavior of Algorithm 1. The desired test matrices are generated by masking the entries of the Lenna image randomly with a 60% missing ratio. Before the convergence analysis, we have to determine a proper penalty parameter $\lambda$, since it plays a vital role in Algorithm 1. Figure 4 plots the relationship between $\lambda$ and the PSNR and SSIM results obtained by Algorithm 1. It is easy to see that $\lambda = 10^{-4}$ is the best choice. In the above experiments, we initialize $\rho = 10^{-2}$ and set the stopping criterion as

$$\eta(k) \triangleq \frac{\left\|X^{(k+1)} - X^{(k)}\right\|_F}{\max\left\{\left\|X^{(k)}\right\|_F, 1\right\}} < 10^{-4} \quad \text{or} \quad k > 500, \qquad (26)$$

where, here and throughout, $\eta(k)$ is called the Relative Neighboring Iteration Error (RNIE). Similarly, we define the Relative True Iteration Error (RTIE) as $t(k) \triangleq \|X^{(k+1)} - M\|_F / \|M\|_F$. In what follows, we always set $\lambda = 10^{-4}$ and $\rho = 10^{-2}$ for Algorithm 1.

Figure 5 plots the convergence results of Algorithm 1. One can easily see that both the RNIE and the RTIE decrease rapidly and consistently. Specifically, once $k$ exceeds 200, the RNIE tends to fall below $10^{-4}$, whereas the RTIE tends to a constant. These observations indicate, to some degree, that the stopping criterion (26) is reasonable for Algorithm 1.

4.2. Randomly Masked Image Completion. To illustrate the effectiveness of our method, we compare Algorithm 1 with 5 other state-of-the-art algorithms: the IRLS algorithm [24], the PSSV algorithm [26], the LTVNN algorithm [33], and the SPC algorithms [35] with TV and QV smoothing, i.e., the SPC-TV and SPC-QV algorithms. In our experiments, we set $q = 0.5$, $\lambda = 10^{-4}$, and $K = 20$ for the IRLS algorithm, $\gamma = 10^{-3}$ for the PSSV algorithm, $\gamma = 0.5$ and $\lambda = 1$ for the LTVNN algorithm, $\rho = [0.01, 0.01]$ for the SPC-TV algorithm, and $\rho = [0.5, 0.5]$ for the SPC-QV algorithm. We uniformly set their maximum number of iterations to 500.

We start by completing the Lenna image masked randomly with several different missing ratios in $\{70\%, 65\%, 60\%, 55\%, 50\%\}$. The obtained PSNR and SSIM results are presented in Table 1, where the highest PSNR and SSIM values are underlined and marked in bold, while the second highest values are underlined only. One can directly see that, as the missing ratio decreases, i.e., as the known information increases, all the algorithms tend to perform better. However, Algorithm 1 outperforms the other algorithms in both PSNR and SSIM regardless of the missing ratio. Specifically, compared to the IRLS and PSSV algorithms, which pursue only the low rank of the underlying matrices, Algorithm 1 improves the PSNR and SSIM performance by more than 5 dB and 0.2, respectively. Figure 6 shows the reconstructed images obtained by these algorithms. It is easy to see that Algorithm 1 not only avoids the staircase effect of the LTVNN algorithm, but also yields better smoothness than the SPC-TV and SPC-QV algorithms.

To further confirm the effectiveness of our method, we apply Algorithm 1, together with the other 5 algorithms, to the 20 widely used test images (Figure 3). The obtained results are presented in Table 2. The overall impression from Table 2 is that Algorithm 1 achieves the highest SSIM in all cases and the highest PSNR in almost all cases. Although Algorithm 1 yields lower PSNR than the SPC-QV algorithm in some cases, these cases occupy only a small proportion (13/80) of all cases, and the PSNR difference in these cases is also very small (less than 0.8 dB).

4.3. Text Masked Image Reconstruction. Compared to the randomly masked image completion problem, text masked image reconstruction is a relatively hard task, since the entries masked by the text are not randomly distributed in the matrix and the text may mask some important texture information. However, this problem can still be transformed into an MC problem by first locating the text and then regarding the corresponding entries as missing values.
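As a toy illustration of this transformation (our sketch; the file name and the assumption that text pixels are exactly zero are hypothetical), one can build the observation mask directly from the corrupted image:

    % Hypothetical sketch (ours): turning a text-masked image into an MC instance
    masked = double(imread('lenna_text.png'));   % hypothetical file name
    Omega = (masked ~= 0);                       % assume text pixels are exactly 0
    M_obs = masked .* Omega;                     % observed data fed to Algorithm 1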

Figure 7 shows the visual results obtained by the abovementioned algorithms on reconstructing the Lenna image with text. In terms of PSNR and SSIM values, Algorithm 1 and the SPC-QV algorithm rank first and second, respectively, followed by the rest of the algorithms. In terms of visual quality, the proposed Algorithm 1 reconstructs the original image well, and its local feature structure is also well preserved, without any visible text traces. In contrast, the images reconstructed by the IRLS and PSSV algorithms are covered with obvious text traces. The LTVNN algorithm suffers from the staircase effect again. The image reconstructed by the SPC-QV algorithm is better than that by the SPC-TV algorithm, but still shows some faint text traces. Based on the previous 20 test images, we generate another 20 text masked images (Figure 8) to compare Algorithm 1 with the 5 other algorithms on the text masked image reconstruction problem. The obtained PSNR and SSIM results are reported in Table 3, which again confirms the excellent performance of Algorithm 1.

5. Conclusion and Future Work

This paper proposed a new method that combines the low-rank and smoothness priors of matrices to tackle the matrix completion problem. Different from the earlier LTVNN method, our method characterizes the smoothness of matrices using a modified second-order total variation, which not only avoids the staircase effect caused by the LTVNN method, but also leads to better performance. Even compared to the recently proposed smooth PARAFAC tensor completion method, our method remains highly competitive in terms of both PSNR and SSIM. Extensive experiments further confirm the excellent performance of the proposed method. Potential future work includes replacing the nuclear norm used in our method with other (nonconvex) low-rank-promoting functions, integrating more of the available priors of matrices to enhance the performance of existing matrix completion methods, and applying our modified second-order total variation to tensor completion problems.

https://doi.org/10.1155/2018/2598160

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the Natural Science Foundation of China under Grants Nos. 61273020 and 61673015.

References

[1] N. Komodakis, "Image Completion Using Global Optimization," in Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Volume 1 (CVPR'06), pp. 442-452, New York, NY, USA, June 2006.

[2] Y. Koren, R. Bell, and C. Volinsky, "Matrix factorization techniques for recommender systems," Computer, vol. 42, no. 8, pp. 30-37, 2009.

[3] H. Ji, C. Liu, Z. Shen, and Y. Xu, "Robust video denoising using low rank matrix completion," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '10), pp. 1791-1798, San Francisco, Calif, USA, June 2010.

[4] E. J. Candes and B. Recht, "Exact matrix completion via convex optimization," Foundations of Computational Mathematics, vol. 9, no. 6, pp. 717-772, 2009.

[5] B. Recht, M. Fazel, and P. A. Parrilo, "Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization," SIAM Review, vol. 52, no. 3, pp. 471-501, 2010.

[6] Y. Chen, "Incoherence-optimal matrix completion," IEEE Transactions on Information Theory, vol. 61, no. 5, pp. 2909-2923, 2015.

[7] G. Liu and P. Li, "Low-rank matrix completion in the presence of high coherence," IEEE Transactions on Signal Processing, vol. 64, no. 21, pp. 5623-5633, 2016.

[8] S. W. Ji and J. P. Ye, "An accelerated gradient method for trace norm minimization," in Proceedings of the 26th Annual International Conference on Machine Learning (ICML '09), pp. 457-464, ACM, June 2009.

[9] K.-C. Toh and S. Yun, "An accelerated proximal gradient algorithm for nuclear norm regularized least squares problems," Pacific Journal of Optimization, vol. 6, no. 3, pp. 615-640, 2010.

[10] M. Fornasier, H. Rauhut, and R. Ward, "Low-rank matrix recovery via iteratively reweighted least squares minimization," SIAM Journal on Optimization, vol. 21, no. 4, pp. 1614-1640, 2011.

[11] J. Cai, E. J. Candes, and Z. Shen, "A singular value thresholding algorithm for matrix completion," SIAM Journal on Optimization, vol. 20, no. 4, pp. 1956-1982, 2010.

[12] J.-F. Cai and S. Osher, "Fast singular value thresholding without singular value decomposition," Methods and Applications of Analysis, vol. 20, no. 4, pp. 335-351, 2013.

[13] C. Xu, Z. C. Lin, and H. B. Zha, "A unified convex surrogate for the Schatten-p norm," in Proceedings of the AAAI Conference on Artificial Intelligence, pp. 926-932, 2017.

[14] S. Foucart and H. Rauhut, A Mathematical Introduction to Compressive Sensing, Springer, Berlin, Heidelberg, Germany, 2013.

[15] R. Chartrand, "Exact reconstruction of sparse signals via nonconvex minimization," IEEE Signal Processing Letters, vol. 14, no. 10, pp. 707-710, 2007.

[16] J. Wen, D. Li, and F. Zhu, "Stable recovery of sparse signals via $\ell_p$-minimization," Applied and Computational Harmonic Analysis, vol. 38, no. 1, pp. 161-176, 2015.

[17] Y. Wang and W. Yin, "Sparse signal reconstruction via iterative support detection," SIAM Journal on Imaging Sciences, vol. 3, no. 3, pp. 462-491, 2010.

[18] P. Yin, Y. Lou, Q. He, and J. Xin, "Minimization of $\ell_{1-2}$ for compressed sensing," SIAM Journal on Scientific Computing, vol. 37, no. 1, pp. A536-A563, 2015.

[19] W. Wang, J. Wang, and Z. Zhang, "Robust Signal Recovery with Highly Coherent Measurement Matrices," IEEE Signal Processing Letters, vol. 24, no. 3, pp. 304-308, 2017.

[20] G. Marjanovic and V. Solo, "On $\ell_q$ optimization and matrix completion," IEEE Transactions on Signal Processing, vol. 60, no. 11, pp. 5714-5724, 2012.

[21] Y. Hu, D. Zhang, J. Ye, X. Li, and X. He, "Fast and accurate matrix completion via truncated nuclear norm regularization," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 9, pp. 2117-2130, 2013.

[22] T.-H. Ma, Y. Lou, and T.-Z. Huang, "Truncated $\ell_{1-2}$ models for sparse recovery and rank minimization," SIAM Journal on Imaging Sciences, vol. 10, no. 3, pp. 1346-1380, 2017.

[23] F. P. Nie, H. Huang, and C. Ding, "Low-rank matrix recovery via efficient Schatten p-norm minimization," in Proceedings of the 26th AAAI Conference on Artificial Intelligence, pp. 655-661, 2012.

[24] M. J. Lai, Y. Y. Xu, and W. T. Yin, "Improved iteratively reweighted least squares for unconstrained smoothed $\ell_q$ minimization," SIAM Journal on Numerical Analysis, vol. 51, no. 2, pp. 927-957, 2013.

[25] F. Cao, J. Chen, H. Ye, J. Zhao, and Z. Zhou, "Recovering low-rank and sparse matrix based on the truncated nuclear norm," Neural Networks, vol. 85, pp. 10-20, 2017.

[26] T.-H. Oh, Y.-W. Tai, J.-C. Bazin, H. Kim, and I. S. Kweon, "Partial sum minimization of singular values in robust PCA: algorithm and applications," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 4, pp. 744-758, 2016.

[27] R. H. Keshavan, A. Montanari, and S. Oh, "Matrix completion from a few entries," IEEE Transactions on Information Theory, vol. 56, no. 6, pp. 2980-2998, 2010.

[28] Z. Wen, W. Yin, and Y. Zhang, "Solving a low-rank factorization model for matrix completion by a nonlinear successive over-relaxation algorithm," Mathematical Programming Computation, vol. 4, no. 4, pp. 333-361, 2012.

[29] K. Lee and Y. Bresler, "ADMiRA: atomic decomposition for minimum-rank approximation," IEEE Transactions on Information Theory, vol. 56, no. 9, pp. 4402-4416, 2010.

[30] Z. Wang, M.-J. Lai, Z. Lu, W. Fan, H. Davulcu, and J. Ye, "Orthogonal rank-one matrix pursuit for low rank matrix completion," SIAM Journal on Scientific Computing, vol. 37, no. 1, pp. A488-A514, 2015.

[31] M. A. Davenport and J. Romberg, "An overview of low-rank matrix recovery from incomplete observations," IEEE Journal of Selected Topics in Signal Processing, vol. 10, no. 4, pp. 608-622, 2016.

[32] P. Rodriguez, "Total variation regularization algorithms for images corrupted with different noise models: a review," Journal of Electrical and Computer Engineering, vol. 2013, Article ID 217021, 18 pages, 2013.

[33] X. Han, J. Wu, L. Wang, Y. Chen, L. Senhadji, and H. Shu, "Linear total variation approximate regularized nuclear norm optimization for matrix completion," Abstract and Applied Analysis, vol. 2014, Article ID 765782, 8 pages, 2014.

[34] Y.-L. Chen, C.-T. Hsu, and H.-Y. M. Liao, "Simultaneous tensor decomposition and completion using factor priors," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 3, pp. 577-591, 2014.

[35] T. Yokota, Q. Zhao, and A. Cichocki, "Smooth PARAFAC decomposition for tensor completion," IEEE Transactions on Signal Processing, vol. 64, no. 20, pp. 5423-5436, 2016.

[36] T. G. Kolda and B. W. Bader, "Tensor decompositions and applications," SIAM Review, vol. 51, no. 3, pp. 455-500, 2009.

[37] K. Papafitsoros, C. B. Schoenlieb, and B. Sengul, "Combined first and second order total variation inpainting using split Bregman," Image Processing On Line, vol. 3, pp. 112-136, 2013.

[38] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, "Distributed optimization and statistical learning via the alternating direction method of multipliers," Foundations and Trends in Machine Learning, vol. 3, no. 1, pp. 1-122, 2011.

[39] P. Benner, R.-C. Li, and N. Truhar, "On the ADI method for Sylvester equations," Journal of Computational and Applied Mathematics, vol. 233, no. 4, pp. 1035-1045, 2009.

[40] Z. C. Lin, R. S. Liu, and Z. X. Su, "Linearized alternating direction method with adaptive penalty for low-rank representation," in Proceedings of the 25th Annual Conference on Neural Information Processing Systems, pp. 1-7, 2011.

[41] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600-612, 2004.

Wendong Wang and Jianjun Wang

School of Mathematics and Statistics, Southwest University, Chongqing 400715, China

Correspondence should be addressed to Jianjun Wang; wjj@swu.edu.cn

Received 17 March 2018; Revised 20 July 2018; Accepted 14 August 2018; Published 12 September 2018

Academic Editor: Seenith Sivasundaram

Caption: Figure 1: (a) Lenna image (256 x 256). (b) The singular values of the Lenna image sorted in decreasing order. A simple calculation shows that the top 40 singular values contribute more than 80% of the information of all singular values.

Caption: Figure 2: The black points denote the local but non-boundary entries, the red points denote the corresponding neighboring entries involved in LTVA and our MSTV, and the grey points denote the remaining entries of the matrices. Obviously, our MSTV takes more informative neighboring entries into consideration and hence yields better performance.

Caption: Figure 3: The test images. All are 256 x 256 and are numbered from 1 to 20, from left to right and from top to bottom.

Caption: Figure 4: Penalty parameter [lambda] selection.

Caption: Figure 5: Convergence behavior of Algorithm 1.

Caption: Figure 6: Visual results on Lenna image with 60% missing ratio by different algorithms. (a) Original image. (b) Randomly masked image. (c) IRLS: PSNR=22.91dB, SSIM=0.707. (d) PSSV: PSNR=23.21dB, SSIM=0.696. (e) LTVNN: PSNR=24.91dB, SSIM=0.853. (f) SPC-TV: PSNR=25.24dB, SSIM=0.801. (g) SPC-QV: PSNR=27.50dB, SSIM=0.898. (h) Ours: PSNR=28.96dB, SSIM=0.936. The figure is better viewed in zoomed PDF.

Caption: Figure 7: Visual results on the Lenna image with text by different algorithms. (a) Original image. (b) Text masked image. (c) IRLS: PSNR=22.14dB, SSIM=0.903. (d) PSSV: PSNR=24.41dB, SSIM=0.929. (e) LTVNN: PSNR=25.33dB, SSIM=0.962. (f) SPC-TV: PSNR=25.33dB, SSIM=0.941. (g) SPC-QV: PSNR=27.07dB, SSIM=0.966. (h) Ours: PSNR=29.12dB, SSIM=0.984. The figure is better viewed in zoomed PDF.

Caption: Figure 8: The desired text masked images. They are numbered in the same manner as we stated in Figure 3.
Table 1: PSNR|SSIM results on the Lenna image with different missing ratios by different algorithms.

            70%           65%           60%           55%           50%
IRLS    21.89|0.595   22.49|0.661   22.91|0.707   23.13|0.747   23.34|0.774
PSSV    21.80|0.585   22.60|0.646   23.21|0.696   23.90|0.744   24.61|0.791
LTVNN   23.89|0.793   24.42|0.824   24.91|0.853   25.41|0.881   25.90|0.900
SPC-TV  24.29|0.743   24.88|0.776   25.24|0.801   25.72|0.829   26.06|0.852
SPC-QV  26.66|0.864   27.15|0.883   27.50|0.898   27.89|0.912   28.18|0.925
Ours    27.87|0.906   28.46|0.921   28.96|0.936   29.54|0.948   29.89|0.957

Table 2: PSNR|SSIM results on the 20 test images with different missing ratios by different algorithms.

Missing ratio = 70%:

        IRLS          PSSV          LTVNN         SPC-TV        SPC-QV        Ours
1   27.58|0.802   27.88|0.821   29.32|0.902   29.14|0.864   30.59|0.910   31.96|0.936
2   21.34|0.586   21.43|0.593   23.37|0.768   23.50|0.715   25.29|0.819   25.25|0.838
3   22.03|0.596   22.12|0.606   24.29|0.804   24.70|0.761   27.37|0.879   28.73|0.922
4   19.87|0.470   20.19|0.487   21.95|0.645   21.55|0.592   22.64|0.692   22.60|0.728
5   22.77|0.630   23.25|0.644   24.67|0.755   24.71|0.735   26.15|0.816   26.04|0.829
6   22.72|0.623   22.56|0.609   24.55|0.766   24.86|0.742   26.94|0.838   27.13|0.861
7   20.76|0.582   20.91|0.592   22.64|0.804   23.11|0.734   24.94|0.852   25.24|0.887
8   24.26|0.683   24.14|0.689   25.48|0.839   26.86|0.808   28.62|0.879   27.85|0.882
9   20.40|0.558   20.33|0.557   22.01|0.687   22.14|0.667   23.78|0.762   23.68|0.784
10  23.39|0.621   23.25|0.614   25.09|0.730   25.34|0.731   27.25|0.816   27.54|0.838
11  19.19|0.539   19.60|0.571   21.69|0.798   21.53|0.690   23.23|0.810   23.29|0.850
12  17.87|0.525   17.92|0.528   19.80|0.696   19.95|0.658   22.04|0.780   22.41|0.800
13  24.69|0.619   24.64|0.616   27.07|0.783   26.83|0.742   28.70|0.833   29.22|0.864
14  24.00|0.668   23.89|0.666   25.53|0.777   25.94|0.766   27.96|0.847   27.92|0.861
15  23.44|0.595   23.71|0.605   25.64|0.735   25.38|0.707   26.92|0.791   27.11|0.814
16  19.73|0.545   19.72|0.546   21.43|0.750   21.92|0.690   24.06|0.825   24.54|0.868
17  19.76|0.514   19.59|0.504   21.94|0.704   21.96|0.653   24.10|0.787   24.74|0.835
18  20.12|0.555   19.96|0.543   22.11|0.748   22.67|0.708   25.71|0.846   27.10|0.898
19  21.83|0.604   21.72|0.599   23.63|0.809   24.35|0.758   26.93|0.889   27.76|0.927
20  20.27|0.577   20.38|0.584   22.01|0.748   22.21|0.703   23.84|0.806   23.20|0.812

Missing ratio = 60%:

        IRLS          PSSV          LTVNN         SPC-TV        SPC-QV        Ours
1   29.11|0.869   29.35|0.882   30.41|0.935   29.84|0.888   30.94|0.924   33.55|0.965
2   22.61|0.712   22.67|0.700   24.22|0.833   24.34|0.784   26.10|0.868   26.23|0.888
3   23.12|0.709   23.80|0.730   25.44|0.868   25.78|0.825   28.27|0.911   29.96|0.949
4   20.90|0.610   20.79|0.596   22.49|0.736   22.01|0.681   23.19|0.768   23.21|0.800
5   23.98|0.735   24.22|0.742   25.44|0.824   25.41|0.803   26.80|0.864   27.00|0.882
6   23.69|0.721   23.94|0.716   25.53|0.833   25.69|0.802   27.61|0.875   27.85|0.895
7   21.98|0.704   22.21|0.702   23.57|0.865   24.04|0.797   25.79|0.890   26.45|0.929
8   25.56|0.782   26.21|0.788   26.68|0.886   28.11|0.852   29.81|0.906   29.14|0.917
9   21.37|0.674   21.28|0.665   22.73|0.769   22.85|0.748   24.45|0.824   24.55|0.846
10  24.26|0.717   24.48|0.725   25.96|0.805   26.19|0.801   28.08|0.867   28.61|0.888
11  20.69|0.669   20.68|0.662   22.52|0.856   22.36|0.753   23.96|0.853   24.07|0.894
12  18.89|0.642   19.12|0.654   20.66|0.784   20.89|0.746   23.07|0.845   23.57|0.865
13  25.69|0.731   25.73|0.719   27.81|0.843   27.51|0.805   29.26|0.875   30.11|0.906
14  25.12|0.760   25.25|0.765   26.49|0.842   26.96|0.830   28.83|0.889   29.03|0.905
15  24.48|0.704   24.49|0.700   26.22|0.800   25.93|0.773   27.50|0.842   28.04|0.870
16  20.85|0.666   21.04|0.658   22.41|0.826   22.81|0.764   24.93|0.872   25.67|0.913
17  20.76|0.634   20.94|0.630   22.86|0.789   22.89|0.738   24.96|0.844   25.75|0.885
18  20.98|0.659   21.76|0.675   23.39|0.827   23.84|0.783   26.75|0.889   28.43|0.932
19  22.78|0.715   23.64|0.721   24.95|0.877   25.59|0.822   28.02|0.920   29.13|0.953
20  21.41|0.695   21.63|0.693   22.89|0.823   23.19|0.779   24.85|0.861   24.22|0.868

Missing ratio = 55%:

        IRLS          PSSV          LTVNN         SPC-TV        SPC-QV        Ours
1   29.45|0.887   29.98|0.905   30.87|0.946   30.06|0.899   31.04|0.930   34.03|0.973
2   22.91|0.750   23.21|0.745   24.53|0.859   24.75|0.815   26.44|0.887   26.65|0.906
3   23.36|0.743   24.58|0.780   25.88|0.890   26.20|0.850   28.66|0.924   30.47|0.958
4   21.15|0.658   21.03|0.643   22.65|0.769   22.18|0.718   23.34|0.796   23.44|0.826
5   24.17|0.766   24.61|0.783   25.71|0.851   25.68|0.830   27.06|0.883   27.30|0.902
6   23.95|0.758   24.54|0.763   25.94|0.859   26.05|0.829   27.90|0.892   28.12|0.909
7   22.31|0.743   22.98|0.752   24.04|0.889   24.55|0.824   26.21|0.905   27.00|0.944
8   25.81|0.810   27.13|0.830   27.11|0.906   28.58|0.875   30.04|0.918   29.39|0.931
9   21.67|0.715   21.77|0.714   23.05|0.803   23.20|0.783   24.77|0.849   24.93|0.871
10  24.47|0.750   25.07|0.772   26.38|0.836   26.62|0.832   28.45|0.888   29.01|0.906
11  21.00|0.710   21.15|0.702   22.89|0.878   22.72|0.781   24.29|0.872   24.42|0.911
12  19.09|0.683   19.66|0.705   20.97|0.814   21.27|0.782   23.44|0.868   23.97|0.887
13  25.94|0.764   26.25|0.760   28.17|0.865   27.83|0.830   29.58|0.891   30.55|0.920
14  25.35|0.790   25.92|0.807   26.93|0.868   27.41|0.857   29.19|0.906   29.52|0.922
15  24.75|0.744   24.91|0.746   26.54|0.831   26.24|0.806   27.85|0.867   28.52|0.893
16  21.12|0.706   21.64|0.707   22.77|0.853   23.24|0.795   25.29|0.891   26.10|0.928
17  20.94|0.670   21.51|0.684   23.20|0.821   23.19|0.770   25.20|0.865   26.08|0.903
18  21.24|0.696   22.64|0.733   23.96|0.856   24.39|0.816   27.18|0.905   28.91|0.943
19  23.00|0.750   24.39|0.770   25.43|0.898   26.09|0.848   28.44|0.931   29.61|0.962
20  21.64|0.734   22.13|0.740   23.20|0.848   23.60|0.813   25.23|0.883   24.58|0.890

Missing ratio = 50%:

        IRLS          PSSV          LTVNN         SPC-TV        SPC-QV        Ours
1   29.90|0.901   30.74|0.923   31.47|0.957   30.46|0.910   31.45|0.937   34.76|0.978
2   23.15|0.782   23.80|0.787   24.88|0.881   25.15|0.842   26.84|0.905   27.10|0.922
3   23.59|0.772   25.42|0.824   26.35|0.910   26.69|0.873   29.04|0.936   30.94|0.965
4   21.28|0.696   21.26|0.687   22.82|0.800   22.36|0.753   23.56|0.823   23.68|0.851
5   24.36|0.793   25.02|0.819   25.95|0.873   25.93|0.854   27.31|0.900   27.61|0.918
6   24.08|0.783   25.01|0.799   26.26|0.880   26.37|0.852   28.12|0.905   28.33|0.920
7   22.37|0.772   23.48|0.790   24.28|0.906   24.78|0.848   26.40|0.917   27.36|0.954
8   26.04|0.831   28.08|0.861   27.62|0.922   29.14|0.892   30.49|0.927   29.80|0.942
9   21.84|0.749   22.19|0.757   23.36|0.833   23.55|0.815   25.09|0.872   25.25|0.890
10  24.62|0.778   25.66|0.812   26.71|0.860   27.02|0.858   28.81|0.905   29.45|0.922
11  21.17|0.743   21.60|0.742   23.09|0.896   22.92|0.806   24.46|0.887   24.65|0.923
12  19.31|0.721   20.38|0.758   21.39|0.846   21.82|0.818   23.99|0.892   24.56|0.909
13  26.11|0.792   26.70|0.796   28.40|0.884   28.08|0.851   29.80|0.906   30.82|0.932
14  25.52|0.813   26.65|0.843   27.26|0.887   27.73|0.876   29.49|0.919   29.90|0.934
15  24.94|0.773   25.25|0.781   26.77|0.855   26.52|0.832   28.12|0.887   28.86|0.911
16  21.28|0.738   22.22|0.752   23.16|0.878   23.66|0.825   25.69|0.908   26.56|0.941
17  21.09|0.706   22.00|0.732   23.52|0.848   23.55|0.804   25.51|0.885   26.44|0.918
18  21.36|0.727   23.50|0.783   24.40|0.880   24.88|0.844   27.60|0.920   29.49|0.953
19  23.19|0.777   25.20|0.815   25.88|0.917   26.53|0.871   28.67|0.941   29.94|0.968
20  21.77|0.764   22.67|0.783   23.49|0.872   23.93|0.840   25.58|0.900   25.00|0.909

Table 3: PSNR|SSIM results on the reconstruction of the 20 test images with text by different algorithms.

        IRLS          PSSV          LTVNN         SPC-TV        SPC-QV        Ours
1   28.56|0.956   30.47|0.975   30.91|0.981   29.78|0.965   30.74|0.973   34.17|0.992
2   22.13|0.911   23.96|0.930   24.43|0.955   24.62|0.939   26.13|0.962   26.65|0.972
3   22.27|0.895   25.45|0.944   25.41|0.963   25.78|0.948   27.70|0.971   30.19|0.987
4   21.14|0.897   21.68|0.904   22.69|0.933   22.50|0.920   23.43|0.939   23.58|0.950
5   23.60|0.919   25.18|0.943   25.33|0.951   25.40|0.945   26.62|0.960   26.96|0.969
6   22.85|0.902   24.94|0.932   25.74|0.956   25.56|0.941   27.30|0.964   27.98|0.973
7   21.14|0.906   23.30|0.933   23.20|0.959   23.70|0.937   25.00|0.963   26.48|0.983
8   24.93|0.926   29.08|0.962   27.33|0.969   28.72|0.956   30.12|0.971   31.02|0.979
9   21.12|0.905   22.70|0.926   22.94|0.940   23.38|0.935   24.66|0.953   25.07|0.962
10  23.84|0.912   26.49|0.948   26.04|0.949   26.76|0.950   28.19|0.964   29.34|0.974
11  20.20|0.895   21.32|0.911   22.21|0.958   22.07|0.924   23.41|0.951   23.72|0.971
12  18.12|0.892   20.51|0.926   20.64|0.943   20.97|0.933   22.62|0.956   23.61|0.967
13  25.32|0.920   27.09|0.938   27.95|0.959   27.72|0.947   29.28|0.965   30.72|0.977
14  24.31|0.920   26.74|0.951   26.67|0.958   27.18|0.953   28.58|0.968   29.21|0.977
15  24.30|0.913   25.41|0.929   26.37|0.948   26.06|0.938   27.42|0.955   28.50|0.968
16  20.19|0.890   22.56|0.920   22.41|0.949   22.97|0.931   24.80|0.961   26.33|0.977
17  19.76|0.874   21.80|0.908   22.77|0.942   22.61|0.925   24.27|0.953   25.57|0.969
18  19.77|0.872   23.90|0.933   23.72|0.953   23.97|0.937   26.48|0.966   29.24|0.984
19  21.43|0.895   24.76|0.931   25.19|0.967   25.12|0.943   27.25|0.971   29.55|0.988
20  20.69|0.902   22.71|0.934   22.80|0.951   23.02|0.941   24.44|0.961   24.18|0.967