
Sparse representation based two-dimensional bar code image super-resolution.

Abstract

This paper presents a sparse-representation-based super-resolution reconstruction method for two-dimensional bar code images. Considering the characteristics of such images, the Kirsch and LBP (local binary pattern) operators are used to extract edge gradient and texture features; together with two second-order derivative filters, these constitute the feature extraction stage. Joint dictionary learning over low-resolution and high-resolution image patch pairs enforces the same sparse representation for corresponding patches. In addition, a global constraint is imposed on the initial high-resolution estimate, which brings the reconstructed result closer to the true image. Experimental comparisons with other reconstruction algorithms demonstrate the effectiveness of the proposed algorithm on two-dimensional bar code images.

Keywords: Two-dimensional bar code, super-resolution, sparse representation, texture feature

1. Introduction

The use of traditional one-dimensional bar codes is greatly limited by their small information capacity. For this reason, the two-dimensional bar code was developed: it stores a large amount of information at high density. Fig. 1 shows an example, a QR (Quick Response) code [1] encoding the lyrics of La Marseillaise. A high-resolution image is therefore required to ensure recognition performance. Most existing schemes address this by increasing the resolution of the camera, but such an approach has quite a number of limitations [1]. In addition, with the development of mobile communication technology, users in some applications can upload two-dimensional bar code images to a server for recognition [1]. Owing to the lack of adequate guidance for users, the resolution of the uploaded images is often too low, which may lead to unsuccessful recognition. Consequently, it is necessary to perform super-resolution reconstruction before recognizing two-dimensional bar code images: the image is improved algorithmically, removing the strict hardware requirements.

[FIGURE 1 OMITTED]

Super-resolution image recovery approaches can be divided into three categories [2]: interpolation-based methods [3-5], reconstruction-based methods [6, 7] and learning-based methods [8-13]. Interpolation-based methods reconstruct the high-resolution image simply from known neighbouring pixels and cannot handle blurry low-resolution inputs well [2]. Reconstruction-based approaches establish an image degradation model between the low-resolution and high-resolution images and define prior knowledge of the image to recover the high-resolution image from its low-resolution counterpart; however, the parameters of the degradation model are difficult to estimate. Learning-based super-resolution derives the relationship between low-resolution and high-resolution images from the similarity of the high-frequency information across different images. This relationship serves as prior knowledge to guide the reconstruction of the high-resolution image. Because this prior knowledge is learned from a large number of samples, learning-based approaches can recover more high-frequency information, so their results are better than those of reconstruction-based approaches.

The key to learning-based methods is describing the relationship between high-resolution and low-resolution images, and sparse representation shows a great advantage here [2, 10]: it can establish the sparse association between high-resolution and low-resolution images well. Studies of visual imaging in primates show that the visual cortex encodes external signals on the principle of sparse representation. Moreover, research [14] shows that the visual system uses the smallest number of neurons to encode natural stimuli, which implies that natural images are sparse.

Super-resolution image reconstruction based on sparse representation recovers the high-resolution image through a sparse representation model, given a known low-resolution image. Yang [10, 15] presented a sparse-coding method for super-resolving raw image patches by sparse representation in 2008 and improved it in 2010. Zeyde [16] built on Yang's work and similarly assumed a local Sparse-Land model on image patches; his work substantially reduced the computational complexity and simplified the algorithm architecture. Dong [17] proposed a centralized sparse representation (CSR) model for image restoration in 2011, which takes a centralized sparsity constraint into account; in 2013, a nonlocally centralized sparse representation model was proposed for image restoration [18]. Adaptive sparse domain selection and adaptive regularization were introduced for image super-resolution reconstruction [19]. A clustered sparse coding method based on multiple geometric dictionaries was proposed for single-image super-resolution [20]. Liu [21] presented a sparse representation algorithm with different morphologic regularizations. Peleg [22] used a statistical prediction model based on sparse representation that avoids any invariance assumption. However, these methods were designed for natural images rather than for two-dimensional bar code images, which have special structural features. In this paper, we exploit this structural prior information to design an algorithm specifically for them.

The remainder of the paper is organized as follows. Section 2 introduces the concrete concepts and steps on sparse representation based super-resolution reconstruction. The feature extraction and dictionary learning are stated in Section 3. Experimental results are provided in Section 4. Finally, Section 5 concludes this paper.

2. Image Super-resolution from Sparsity

2.1 Sparse Representation

The basic idea of sparse representation [23] is that a natural signal can be compactly represented as a linear combination of predefined atoms. Accordingly, a signal $x \in \mathbb{R}^n$ can be represented by a finite number of atoms:

$x = D\alpha = \sum_{i} d_i \alpha_i$, (1)

where $D \in \mathbb{R}^{n \times L}$ ($n \ll L$), $D = [d_1, d_2, \ldots, d_L]$, in which $d_i$ is an atom of the matrix $D$, and $\alpha = [\alpha_1, \alpha_2, \ldots, \alpha_L]^T \in \mathbb{R}^L$ is the coefficient vector. Because $n \ll L$, the matrix $D$ is over-complete and Eq. (1) is underdetermined; in other words, the equation has infinitely many solutions. The purpose of sparse representation is to find the sparsest representation of the signal $x$, i.e., the coefficient vector $\alpha$ with the fewest non-zero entries. The sparse representation problem can therefore be written as

$\min_{\alpha} \|\alpha\|_0 \quad \text{s.t.} \quad x = D\alpha$, (2)

where $\|\cdot\|_0$ denotes the $L_0$ norm, i.e., the number of non-zero elements; $\alpha$ is the sparse representation of $x$, and $D$ is the sparse transform matrix, namely the sparse dictionary. In practical applications some noise in the signal is inevitable, so it suffices to satisfy Eq. (2) within a certain error. Letting $E$ be the sparse representation error, Eq. (2) becomes

$\min_{\alpha} \|\alpha\|_0 \quad \text{s.t.} \quad \|x - D\alpha\|_2^2 \le E$. (3)

In Eq. (3), it has been proved that the solution of the $L_0$-norm optimization problem is unique when the coefficients $\alpha$ are sparse enough [24]. However, finding the sparsest coefficients is an NP-hard problem. Moreover, the $L_0$- and $L_1$-norm optimization problems have the same solution under the RIP condition [25]. Since the $L_1$-norm optimization problem can be solved far more efficiently, Eq. (3) can be further relaxed to

$\min_{\alpha} \|\alpha\|_1 \quad \text{s.t.} \quad \|x - D\alpha\|_2^2 \le E$. (4)

The sparse model above can be solved by the matching pursuit, orthogonal matching pursuit or basis pursuit algorithms. With these methods, given the sparse dictionary $D$, we can obtain the sparsest coefficients $\alpha$ corresponding to a signal $x$.
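As an illustration, the following is a minimal orthogonal matching pursuit (OMP) sketch, one of the solvers mentioned above: it greedily picks the atom most correlated with the current residual, then re-fits the coefficients by least squares on the chosen support. This is an illustrative implementation, not the solver used in the paper.

```python
import numpy as np

def omp(D, x, k):
    """Recover a k-sparse coefficient vector alpha with x ~= D @ alpha."""
    residual = x.astype(float).copy()
    support = []
    alpha = np.zeros(D.shape[1])
    for _ in range(k):
        # Atom with the largest absolute correlation with the residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        support.append(j)
        # Least-squares re-fit restricted to the current support.
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    alpha[support] = coef
    return alpha

# Deterministic check: an over-complete 4x8 dictionary (identity atoms plus
# weakly correlated extras) and a 2-sparse signal, recovered exactly.
D = np.hstack([np.eye(4), 0.1 * np.ones((4, 4)) + 0.1 * np.eye(4)])
true_alpha = np.zeros(8)
true_alpha[0], true_alpha[2] = 2.0, -1.5
x = D @ true_alpha
alpha = omp(D, x, k=2)
```

Because the extra atoms correlate only weakly with the identity atoms, the greedy selection picks the true support and the least-squares step recovers the coefficients exactly.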

2.2 Super-resolution Reconstruction

Because a digital image is a special two-dimensional signal, sparse representation can be applied to digital images. Super-resolution image reconstruction based on sparse representation recovers the high-resolution image $Y$ corresponding to a given low-resolution image $X$ [15]. The fundamental steps are as follows. First, split the input low-resolution image $X$ into small image patches $x$, with overlap between adjacent patches to improve reconstruction accuracy. Second, extract appropriate features $\tilde{x}$ to represent the original image patches. Third, use the dictionary of low-resolution image feature patches $D_L$ to sparsely represent each feature patch $\tilde{x}$ and obtain the sparse representation coefficients $\alpha$. Fourth, reconstruct the high-resolution patch corresponding to $\tilde{x}$ from $\alpha$ and the dictionary of high-resolution image patches $D_H$. Fifth, once all the high-resolution patches are obtained, compose them into the initial high-resolution image $Y_0$ corresponding to $X$. Lastly, enforce a global constraint on $Y_0$ to obtain the final result $Y$.
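The patch-wise steps above can be sketched as follows. The helper names `extract_features` and `solve_sparse` are placeholders standing in for Sections 3.1 and 2.1, and for brevity the sketch reconstructs at the same scale rather than an up-sampled one.

```python
import numpy as np

def super_resolve_patches(X, D_H, extract_features, solve_sparse,
                          patch=5, step=4):
    """Assemble the initial HR estimate Y0 from overlapping LR patches."""
    H, W = X.shape
    Y0 = np.zeros((H, W))
    weight = np.zeros((H, W))            # overlap counts for averaging
    for i in range(0, H - patch + 1, step):
        for j in range(0, W - patch + 1, step):
            x = X[i:i + patch, j:j + patch]          # step 1: split
            feat = extract_features(x)               # step 2: features
            alpha = solve_sparse(feat)               # step 3: Eq. (6)
            y = (D_H @ alpha).reshape(patch, patch)  # step 4: Eq. (7)
            Y0[i:i + patch, j:j + patch] += y        # step 5: compose
            weight[i:i + patch, j:j + patch] += 1
    return Y0 / np.maximum(weight, 1)    # average overlapping regions
```

With identity stand-ins for the feature extractor, the sparse solver and $D_H$, the function simply reassembles the input image, which makes the overlap-averaging logic easy to check.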

According to the theory of sparse representation in Section 2.1, the sparse representation coefficients $\alpha$ of an image feature patch $\tilde{x}$ can be solved by

$\min_{\alpha} \|\alpha\|_1 \quad \text{s.t.} \quad \|D_L \alpha - \tilde{x}\|_2^2 \le \epsilon$, (5)

where $\epsilon$ is the sparse representation error, $D_L$ is the dictionary of low-resolution image feature patches, $\tilde{x}$ is the image feature patch and $\alpha$ denotes the sparse coefficients. By the theory of Lagrange multipliers, Eq. (5) can be transformed to

$\min_{\alpha} \|D_L \alpha - \tilde{x}\|_2^2 + \mu \|\alpha\|_1$, (6)

where $\mu$ is a regularization parameter balancing the representation error against sparsity. Solving Eq. (6) yields the optimal coefficients, denoted $\alpha^*$. The high-resolution patch $y$ is then obtained as

$y = D_H \alpha^*$, (7)

where $D_H$ is the dictionary of high-resolution image patches. The initial estimate of the high-resolution image, $Y_0$, is obtained by combining the patches $y$.

The steps above do not take the global image into consideration, so a global constraint must be enforced after obtaining $Y_0$. Considering the actual imaging process, the reconstructed high-resolution image and the input low-resolution image should satisfy the imaging model within an acceptable range, which makes the reconstruction more realistic. The target function of the global constraint is therefore defined as

$Y^* = \arg\min_{Y} \|DBY - X\|_2^2 + c\|Y - Y_0\|_2^2$, (8)

where $D$ represents down-sampling, $B$ represents image blur, $c$ weights the fidelity to the initial estimate $Y_0$, and $Y^*$ is the final reconstruction result. $Y^*$ can be computed by the back-projection algorithm. The result accounts for the integrity of the whole image and is superior to the result obtained with only the local constraint.
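The back-projection idea can be sketched as follows: repeatedly measure the residual between the observed low-resolution image $X$ and the degraded current estimate, and push it back into the high-resolution estimate. For brevity the blur $B$ is taken as the identity and $D$ as 2x decimation; the real algorithm would use the actual blur kernel and a proper interpolation operator.

```python
import numpy as np

def back_project(Y0, X, iters=50, step=1.0):
    """Refine HR estimate Y0 so that its down-sampled version matches X."""
    Y = Y0.astype(float).copy()
    for _ in range(iters):
        simulated = Y[::2, ::2]                    # D B Y with B = identity
        residual = X - simulated                   # mismatch with LR input
        # Crude nearest-neighbour up-sampling of the residual.
        up = np.repeat(np.repeat(residual, 2, axis=0), 2, axis=1)
        Y += step * up / 4.0                       # spread residual back
    return Y
```

Each iteration shrinks the low-resolution mismatch by a constant factor, so the down-sampled estimate converges geometrically to the observed $X$.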

3. Feature Extraction and Dictionary Learning

3.1 Feature Extraction

3.1.1 Texture Feature

Because two-dimensional bar code images have an obvious texture, which reflects their structure, we can exploit this prior information by extracting texture features. In addition, texture features change only slightly with resolution, so the texture characteristics of the low-resolution and high-resolution images are similar. Considering these factors, we use texture as one feature linking the high-resolution and low-resolution patches of two-dimensional bar code images. Structural and statistical methods are the two common approaches to extracting image texture features. Structural methods suit images with regular texture, whereas statistical methods suit images whose texture is obvious but not regular; the latter are therefore more suitable for two-dimensional bar code images. Common statistical methods include the gray-level histogram and the gray-level co-occurrence matrix. In our algorithm, the local binary pattern (LBP) is used to describe the texture of the two-dimensional bar code images.

LBP is a powerful operator for describing local image texture [26]. The specific idea is as follows. First, fix the window size, usually 3 x 3, so that the eight-neighborhood of a pixel is considered. Second, take the gray level of the central pixel of the window as a threshold and binarize the other pixels of the window: a pixel whose gray level is larger than the threshold is labeled 1, otherwise 0.

Lastly, the LBP value of the window, which serves as its texture feature, is obtained as the weighted sum of these labels. LBP is calculated as

$\mathrm{LBP}(x_c, y_c) = \sum_{i=0}^{p-1} s(g_i - g_c)\, 2^i, \qquad s(t) = \begin{cases} 1, & t \ge 0 \\ 0, & t < 0 \end{cases}$ (9)

where $(x_c, y_c)$ is the coordinate of the central pixel of the window, $g_c$ is its gray level, and $p$ is the number of pixels in the window excluding the centre. For a 3 x 3 window, $p = 8$ and $g_i$ $(i = 0, \ldots, p-1)$ are the gray levels of the neighbouring pixels. The result of Eq. (9) is the LBP value at $(x_c, y_c)$.

The LBP method describes local texture well and offers robustness and fast computation, so we adopt it as one of the reconstruction features in our algorithm.
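A direct implementation of Eq. (9) for one 3 x 3 window: threshold the eight neighbours against the centre and take the weighted sum of the binary labels (the $\ge$ convention for ties follows the standard LBP definition).

```python
import numpy as np

def lbp_3x3(window):
    """LBP value at the centre pixel of a 3x3 window."""
    c = window[1, 1]
    # Neighbours in a fixed clockwise order starting at the top-left.
    neighbours = [window[0, 0], window[0, 1], window[0, 2],
                  window[1, 2], window[2, 2], window[2, 1],
                  window[2, 0], window[1, 0]]
    # Weighted sum of binary labels: label i contributes 2**i when g_i >= g_c.
    return sum(1 << i for i, g in enumerate(neighbours) if g >= c)
```

For example, a window whose top-left and bottom-right neighbours exceed the centre yields the labels at positions 0 and 4, i.e. the value $2^0 + 2^4 = 17$.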

3.1.2 Edge Gradient Feature

Edge gradient features are commonly extracted by differential operators based on derivative information. According to the order of the partial derivative, differential operators fall into two main categories, first-order and second-order [27]. First-order operators include the Sobel, Kirsch, Roberts and Prewitt operators; second-order operators include the Laplacian. We use the Kirsch operator to extract the edge gradient feature because two-dimensional bar code images have distinct edge directions.

The Kirsch edge detection operator not only detects image edges but also provides a precise gradient direction, which is very significant for reconstructing two-dimensional bar code images. The Kirsch operator consists of eight convolution kernels that detect edges in eight directions: 0°, 45°, 90°, 135°, 180°, 225°, 270° and 315°. Let $\mathrm{out}_k$, $k = 0, \ldots, 7$ be the results of convolving the image patch with the eight templates. The edge gradient is then obtained by

$M(i, j) = \max_{k} |\mathrm{out}_k|, \qquad k = 0, \ldots, 7$, (10)

where $|\cdot|$ denotes the absolute value.
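A sketch of Eq. (10) at a single pixel: the base mask below is the standard Kirsch north kernel, and the other seven directions are generated by rotating its border values clockwise; the response is the maximum absolute convolution result.

```python
import numpy as np

# Standard Kirsch north kernel; the 5s mark the edge direction.
BASE = np.array([[5, 5, 5],
                 [-3, 0, -3],
                 [-3, -3, -3]])

def kirsch_kernels():
    """Generate the 8 direction kernels by rotating the border of BASE."""
    # Border positions in clockwise order starting at the top-left.
    idx = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    border = [BASE[i] for i in idx]
    kernels = []
    for r in range(8):
        k = np.zeros((3, 3))
        for (i, j), v in zip(idx, border[r:] + border[:r]):
            k[i, j] = v
        kernels.append(k)
    return kernels

def kirsch_response(window):
    """Max absolute response of the 8 Kirsch kernels on a 3x3 window."""
    return max(abs(float(np.sum(k * window))) for k in kirsch_kernels())
```

On a horizontal step edge the strongest response comes from the kernel whose 5s align with the bright side, so the maximum clearly dominates the other directions.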

3.1.3 Feature Extraction Operation

According to Section 2, features of the input low-resolution image are extracted by the feature extraction operator, and the sparse dictionary $D_L$ is learned from a variety of low-resolution image feature patches. Feature extraction determines the form of the image patches to be sparsely represented; the more suitable the selected features, the better the reconstruction result.

Besides the texture and edge gradient features, we also use second-order derivatives as features of the low-resolution patches. The two filters used to extract them are defined as

$H_1 = [-1, 0, 2, 0, -1]$, (11)

$H_2 = [-1, 0, 2, 0, -1]^T$. (12)

The result of feature extraction is then defined as

$\tilde{x} = [L,\; M,\; H_1 * \mathrm{IMG},\; H_2 * \mathrm{IMG}]$, (13)

where $\mathrm{IMG}$ is the input low-resolution image and $*$ denotes convolution. $L$ and $M$ are the results of Eq. (9) and Eq. (10), respectively, and $H_1$ and $H_2$ are the filters defined in Eqs. (11) and (12). $\tilde{x}$ is used in Eq. (6) to compute the coefficients $\alpha$ and forms the input to dictionary learning.
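A sketch of Eq. (13): apply the two derivative filters of Eqs. (11)-(12) along rows and columns (here with a simple 'same'-size 1-D convolution), then stack the four feature maps of a patch into one feature vector. The LBP and Kirsch maps are assumed to be computed as in Sections 3.1.1 and 3.1.2.

```python
import numpy as np

H1 = np.array([-1.0, 0.0, 2.0, 0.0, -1.0])   # Eq. (11); H2 is its transpose

def second_derivatives(img):
    """Second-order derivative responses H1 * IMG and H2 * IMG."""
    rows = np.apply_along_axis(np.convolve, 1, img, H1, mode='same')
    cols = np.apply_along_axis(np.convolve, 0, img, H1, mode='same')
    return rows, cols

def feature_vector(lbp_patch, kirsch_patch, d1_patch, d2_patch):
    """Concatenate the four feature maps of one patch, as in Eq. (13)."""
    return np.concatenate([lbp_patch.ravel(), kirsch_patch.ravel(),
                           d1_patch.ravel(), d2_patch.ravel()])
```

An impulse image makes the filter response easy to verify: the row filter reproduces $H_1$ centred on the impulse, and the column filter reproduces $H_2$.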

3.2 Dictionary Learning

The training set for dictionary learning consists of pairs of high-resolution image patches and the corresponding low-resolution image feature patches. Let the patch pairs be $P = \{X_L, Y_H\}$, where $X_L$ is the set of low-resolution image feature patches and $Y_H$ is the set of corresponding high-resolution image patches. Joint dictionary learning over $X_L$ and $Y_H$ makes the sparse representations of corresponding low-resolution and high-resolution patches identical. First, sparse representation is performed on $X_L$ and $Y_H$ separately:

$\min_{D_L, \alpha} \|X_L - D_L \alpha\|_2^2 + \mu_1 \|\alpha\|_1$, (14)

$\min_{D_H, \alpha} \|Y_H - D_H \alpha\|_2^2 + \mu_2 \|\alpha\|_1$, (15)

where $\alpha$ is the common sparse representation of $X_L$ and $Y_H$, $D_L$ and $D_H$ are the dictionaries of $X_L$ and $Y_H$, respectively, and $\mu_1$ and $\mu_2$ are regularization parameters.

Afterwards, Eqs. (14) and (15) are combined to unify the sparse representation of $X_L$ and $Y_H$:

$\min_{D_c, \alpha} \frac{1}{N}\|X_L - D_L \alpha\|_2^2 + \frac{1}{M}\|Y_H - D_H \alpha\|_2^2 + \mu\left(\frac{1}{N} + \frac{1}{M}\right)\|\alpha\|_1$, (16)

where $M$ and $N$ are the dimensions of the high-resolution and low-resolution image feature patches in vector form, respectively, and $D_c$ denotes the coupled dictionary formed by stacking $D_L$ and $D_H$ with these weights. There are two unknown variables in Eq. (16), so the problem is non-convex; however, once either variable is fixed, the problem becomes convex in the other. It can therefore be solved as follows: first, fix $D_c$ and compute the sparsest representation $\alpha$; then, fix $\alpha$ and compute the optimal $D_c$; lastly, repeat these two steps until the convergence condition is met.
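The alternating scheme can be sketched as follows. With $D_c$ fixed, a few ISTA soft-threshold steps stand in for a full $L_1$ solver; with the codes fixed, the dictionary is updated by least squares and its atoms renormalized. This is an illustrative loop under simplified assumptions, not the exact solver used in the paper.

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding, the proximal operator of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def joint_dictionary(P, n_atoms=8, mu=0.05, outer=10, inner=30):
    """P: matrix of stacked HR/LR training patch pairs (one pair per column)."""
    rng = np.random.default_rng(0)
    D = rng.standard_normal((P.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)               # unit-norm atoms
    A = np.zeros((n_atoms, P.shape[1]))
    for _ in range(outer):
        # Step 1: fix D, update the sparse codes A by ISTA.
        L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant
        for _ in range(inner):
            A = soft(A - (D.T @ (D @ A - P)) / L, mu / L)
        # Step 2: fix A, update D by least squares, then renormalize atoms.
        D = P @ np.linalg.pinv(A)
        norms = np.maximum(np.linalg.norm(D, axis=0), 1e-12)
        D /= norms
        A *= norms[:, None]                      # keep the product D @ A fixed
    return D, A
```

Renormalizing the atoms and rescaling the codes leaves the product $DA$ unchanged, which keeps the two alternating steps consistent with each other.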

So far, the whole framework of sparse-representation-based image super-resolution is clear; it is listed in Algorithm 1. Using Algorithm 1, we can reconstruct the high-resolution image corresponding to an input low-resolution image by sparse representation.
Algorithm 1. Image Super-resolution based on Sparse Representation

(1) Extract the texture feature of the two-dimensional bar code images
    by the LBP operator.
(2) Extract the edge features of the two-dimensional bar code images by
    the Kirsch edge detection operator and the two second-order derivatives.
(3) Train the dictionaries of low-resolution and high-resolution image
    patches, denoted by $D_L$ and $D_H$, respectively.
(4) For each patch of the low-resolution image $X$, compute the optimal
    sparse representation $\alpha^*$ by Eq. (6). Then generate the
    high-resolution image patch $y$ by Eq. (7).
(5) Combine the high-resolution image patches $y$ into the initial
    estimate of the high-resolution image $Y_0$.
(6) According to Eq. (8), compute the final high-resolution image $Y^*$,
    which meets the global reconstruction constraint, by the
    back-projection algorithm.


4. Experimental Results

A commonly used performance indicator, PSNR (peak signal-to-noise ratio), is adopted in the experiments to measure the quality of the reconstructed high-resolution images: the higher the PSNR, the better the reconstruction. All simulations are performed on a PC with an i7-6700 3.4 GHz CPU and 8 GB of memory; Matlab is used as the development tool in our experimental studies.
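For reference, the PSNR measure used throughout the experiments can be computed as follows for 8-bit images (peak value 255):

```python
import numpy as np

def psnr(reference, reconstructed):
    """Peak signal-to-noise ratio in dB between two 8-bit images."""
    mse = np.mean((np.asarray(reference, float) -
                   np.asarray(reconstructed, float)) ** 2)
    if mse == 0:
        return float('inf')           # identical images
    return 10.0 * np.log10(255.0 ** 2 / mse)
```

For example, two images differing by exactly one gray level everywhere give MSE = 1 and hence a PSNR of about 48.13 dB.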

The proposed algorithm is compared with other super-resolution reconstruction algorithms to demonstrate its effectiveness. In dictionary learning, the number of samples and the dictionary size are two important parameters with significant influence on the results, so their impact on the reconstruction is analyzed. The improvement in recognition rate and the contribution of the different features are also investigated. In addition, the influence of noise in the input images is examined.

4.1 Comparison of Different Algorithms

To demonstrate the effectiveness of the proposed algorithm in this paper, it is compared with three classical or state-of-the-art image reconstruction methods: Bicubic, Yang [15] and BP [28]. In this experiment, the parameter settings are as follows: the dictionary size is 512, the number of samples is 18000, the size of image patch is 5 x 5 and the regularization parameter is 0.15. The images are magnified by a factor of 2.

In this section, 1000 QR Code images with original resolutions from 320 x 320 to 640 x 640 are collected as experimental data. The QR Code has 40 versions [29]. We divide the bar code images into 10 data sets, marked data set 0 to data set 9; each is composed of 100 images spanning 4 versions. For more representative and accurate results, all 10 data sets are run 10 times independently, and the average results are listed in Table 1. The results indicate that our method is competitive and performs better than the Bicubic, Yang and BP methods. Yang and BP are not designed for two-dimensional bar codes; our method is designed specifically for them, so it obtains better results.

To compare the performance of the four algorithms intuitively, visual comparisons are shown in Fig. 2. Fig. 2(a) is the input low-resolution image, magnified by a factor of 2. The results of Bicubic, Yang, BP and our method are shown in (b), (c), (d) and (e), respectively. The result of the Bicubic method is overly smooth and appears blurred, while the results of Yang and BP contain more oscillations than ours. Of the four approaches, our method achieves the best result.

Taken together, the PSNR values and the visual comparison show that the method proposed in this paper is effective for super-resolution reconstruction of two-dimensional bar code images.

[FIGURE 2 OMITTED]

4.2 Influence of Number of Samples

This section analyzes the effect of the number of samples on super-resolution reconstruction of two-dimensional bar code images. We increase the number of samples from 2000 to 24000 in steps of 2000 while keeping the other parameters unchanged, and run the proposed method; the results are plotted in Fig. 3. As the number of samples increases, the PSNR rises and eventually stabilizes.

[FIGURE 3 OMITTED]

4.3 Influence of Dictionary Size

To explore the influence of the dictionary size on super-resolution reconstruction of two-dimensional bar code images, we fix the other parameters and vary the dictionary size from 128 to 2048. As the results in Fig. 4 show, the PSNR reaches its maximum at a dictionary size of 512.

[FIGURE 4 OMITTED]

4.4 Improvement of the Recognition Rate

As is well known, the density of a two-dimensional bar code image is important for recognition. The module width [29] defines this density: a small module width means a high density, which is difficult to recognize. Here we test the improvement in recognition rate after super-resolution reconstruction with our method.

In this experiment, 400 QR code images with various module widths are collected. We reconstruct the images with the proposed super-resolution algorithm and then recognize them with the well-known bar code decoder ClearImage [30]. The recognition rates before and after super-resolution are given in Table 2. As the results show, the proposed algorithm effectively improves the recognition rate when the module width is small.

4.5 Performance of Different Features

Three kinds of features are used in the proposed algorithm: the LBP operator, the Kirsch operator and the second-order derivatives. In this section, experiments analyse the effect of each feature. Three data sets of two-dimensional bar code images with different densities, marked data set A to data set C, are used; each consists of ten images. Different combinations of the three operators are adopted to reconstruct the high-resolution images, and the mean results over the three data sets are shown in Table 3. The second-order derivatives alone perform worst of all the compared combinations. Combining LBP with the second-order derivatives performs better, and combining Kirsch with the second-order derivatives performs better still, showing that both LBP and Kirsch are effective. Applying all three operators together outperforms every other combination on all three data sets.

4.6 Influence of the Noise

In real applications, image data is always mixed with noise, so we design experiments to test the robustness of the proposed algorithm. The data sets are the same as above. Increasing levels of Gaussian noise are added to the input images, with mean 0 and standard deviation ranging from 0 to 10. Table 4 shows the PSNRs of the high-resolution images reconstructed from the noisy inputs by three algorithms. Our method outperforms Bicubic and Yang on all three data sets, demonstrating its robustness to different levels of noise.

5. Conclusion

This paper proposes a sparse-representation-based super-resolution image reconstruction method for two-dimensional bar code images. Considering the characteristics of such images, we select the texture feature, the edge gradient feature and two second-order derivatives for feature extraction, which suit two-dimensional bar code images well. Extensive experimental results show that our method is effective for their super-resolution reconstruction. The influence of dictionary size, number of samples and noise, as well as the improvement in recognition rate, are also examined and discussed, and the contributions of the different features are investigated.

6. Acknowledgements

This work was supported by the National Natural Science Foundation of China under Grants No. 61375021, No. 61203246 and No. 41301407; the Natural Science Foundation of Jiangsu Province under Grant No. BK20131365; and the Foundation of Center in NUAA under Grant KFJJ20151608.

The authors would like to thank referees for their comments that helped to clarify the ideas of the paper. The authors are also grateful to PartiTek Inc. and Newland Auto-ID Inc. for providing the hardware and data in the analysis.

References

[1] H. Yang, A. C. Kot and X. Jiang, "Binarization of low-quality barcode images captured by mobile phones using local window of adaptive location and size," IEEE Transactions on Image Processing, vol. 21, no. 1, pp. 418-425, 2012. Article (CrossRef Link)

[2] C. Ren, X. He, Q. Teng, Y. Wu, and T. Q. Nguyen, "Single Image Super-Resolution Using Local Geometric Duality and Non-Local Similarity," IEEE Transactions on Image Processing, vol. 25, no. 5, pp. 2168-2183, 2016. Article (CrossRef Link)

[3] K. Hung and W. Siu, "Robust soft-decision interpolation using weighted least squares," IEEE Transactions on Image Processing, vol. 21, no. 3, pp. 1061-1069, 2012. Article (CrossRef Link)

[4] X. Zhang and X. Wu, "Image interpolation by adaptive 2-D autoregressive modeling and soft-decision estimation," IEEE Transactions on Image Processing, vol. 17, no. 6, pp. 887-896, 2008. Article (CrossRef Link)

[5] L. Zhang and X. Wu, "An edge-guided image interpolation algorithm via directional filtering and data fusion," IEEE Transactions on Image Processing, vol. 15, no. 8, pp. 2226-2238, 2006. Article (CrossRef Link)

[6] K. Zhang, X. Gao, D. Tao, and X. Li, "Single image super-resolution with non-local means and steering kernel regression," IEEE Transactions on Image Processing, vol. 21, no. 11, pp. 4544-4556, 2012. Article (CrossRef Link)

[7] Z. Lin and H. Shum, "Fundamental limits of reconstruction-based superresolution algorithms under local translation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 1, pp. 83-97, 2004. Article (CrossRef Link)

[8] H. Chang, D. Yeung and Y. Xiong, "Super-resolution through neighbor embedding," in Proc. of Computer Vision and Pattern Recognition, 2004. CVPR 2004. Proceedings of the 2004 IEEE Computer Society Conference on, pp. I-I, 2004. Article (CrossRef Link)

[9] W. T. Freeman, T. R. Jones and E. C. Pasztor, "Example-based super-resolution," IEEE Computer Graphics and Applications, vol. 22, no. 2, pp. 56-65, 2002. Article (CrossRef Link)

[10] J. Yang, J. Wright, T. Huang, and Y. Ma, "Image super-resolution as sparse representation of raw image patches," in Proc. of Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on, pp. 1-8, 2008. Article (CrossRef Link)

[11] K. I. Kim and Y. Kwon, "Single-image super-resolution using sparse regression and natural image prior," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 6, pp. 1127-1133, 2010. Article (CrossRef Link)

[12] C. Dong, C. C. Loy, K. He, and X. Tang, "Image super-resolution using deep convolutional networks," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 2, pp. 295-307, 2016. Article (CrossRef Link)

[13] K. Zhang, X. Gao, X. Li, and D. Tao, "Partially supervised neighbor embedding for example-based image super-resolution," IEEE Journal of Selected Topics in Signal Processing, vol. 5, no. 2, pp. 230-239, 2011. Article (CrossRef Link)

[14] B. A. Olshausen, "Emergence of simple-cell receptive field properties by learning a sparse code for natural images," Nature, vol. 381, no. 6583, pp. 607-609, 1996. Article (CrossRef Link)

[15] J. Yang, J. Wright, T. S. Huang, and Y. Ma, "Image super-resolution via sparse representation," IEEE Transactions on Image Processing, vol. 19, no. 11, pp. 2861-2873, 2010. Article (CrossRef Link)

[16] R. Zeyde, M. Elad and M. Protter, "On single image scale-up using sparse-representations," in Proc. of International Conference on Curves and Surfaces, pp. 711-730, 2010. Article (CrossRef Link)

[17] W. Dong, L. Zhang and G. Shi, "Centralized sparse representation for image restoration," in Proc. of 2011 International Conference on Computer Vision, pp. 1259-1266, 2011. Article (CrossRef Link)

[18] W. Dong, L. Zhang, G. Shi, and X. Li, "Nonlocally centralized sparse representation for image restoration," IEEE Transactions on Image Processing, vol. 22, no. 4, pp. 1620-1630, 2013. Article (CrossRef Link)

[19] W. Dong, L. Zhang, G. Shi, and X. Wu, "Image deblurring and super-resolution by adaptive sparse domain selection and adaptive regularization," IEEE Transactions on Image Processing, vol. 20, no. 7, pp. 1838-1857, 2011. Article (CrossRef Link)

[20] S. Yang, M. Wang, Y. Chen, and Y. Sun, "Single-image super-resolution reconstruction via learned geometric dictionaries and clustered sparse coding," IEEE Transactions on Image Processing, vol. 21, no. 9, pp. 4016-4028, 2012. Article (CrossRef Link)

[21] W. Liu and S. Li, "Sparse representation with morphologic regularizations for single image super-resolution," Signal Processing, vol. 98, no. 5, pp. 410-422, 2014. Article (CrossRef Link)

[22] T. Peleg and M. Elad, "A statistical prediction model based on sparse representations for single image super-resolution," IEEE Transactions on Image Processing, vol. 23, no. 6, pp. 2569-2582, 2014. Article (CrossRef Link)

[23] J. Mairal, M. Elad and G. Sapiro, "Sparse representation for color image restoration," IEEE Transactions on Image Processing, vol. 17, no. 1, pp. 53-69, 2008. Article (CrossRef Link)

[24] D. L. Donoho and X. Huo, "Uncertainty principles and ideal atomic decomposition," IEEE Transactions on Information Theory, vol. 47, no. 7, pp. 2845-2862, 2001. Article (CrossRef Link)

[25] E. J. Candes and T. Tao, "Near-optimal signal recovery from random projections: Universal encoding strategies?" IEEE Transactions on Information Theory, vol. 52, no. 12, pp. 5406-5425, 2006. Article (CrossRef Link)

[26] T. Ojala, M. Pietikainen and T. Maenpaa, "Multiresolution gray-scale and rotation invariant texture classification with local binary patterns," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 7, pp. 971-987, 2002. Article (CrossRef Link)

[27] R. C. Gonzalez and R. E. Woods, Digital image processing, Prentice hall Upper Saddle River, NJ, 2008.

[28] G Polatkan, M. Zhou, L. Carin, D. Blei, and I. Daubechies, "A bayesian nonparametric approach to image super-resolution," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 2, pp. 346-358, 2015. Article (CrossRef Link)

[29] ISO/IEC18004, "Information technology - automatic identification and data capture techniques - qrcode bar code symbology specification," 2015.

[30] InliteResearch, "Clearimage barcode recognition sdk," https://www.inliteresearch.com/barcode-recognition-sdk.php, 2016.

Yiling Shen (1), Ningzhong Liu (1) and Han Sun (1)

(1) College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics

Nanjing, Jiangsu 210016, P. R. China

[e-mail: lnz_nuaa@163.com]

(*) Corresponding author: Ningzhong Liu

Received August 11, 2016; revised December 7, 2016; revised January 21, 2017; accepted February 7, 2017; published April 30, 2017

Yiling Shen received the B.S. degree in Computer Science and Technology from Nanjing University of Aeronautics and Astronautics, China in 2014. She is currently pursuing the M.S. degree at Nanjing University of Aeronautics and Astronautics. Her research interests include pattern recognition and image processing.


Ningzhong Liu was born in 1975. He received the B.S. degree in Computer Engineering from Nanjing University of Science and Technology, Nanjing, China in 1998, and the Ph.D. degree in Pattern Recognition and Intelligent Systems from Nanjing University of Science and Technology in 2003. He is currently a professor in the College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, China. His research interests include pattern recognition and image processing.


Han Sun received the B.S. degree in Computer Engineering from Nanjing University of Science and Technology, Nanjing, China in 2000, and the Ph.D. degree in Pattern Recognition and Intelligent Systems from Nanjing University of Science and Technology in 2005. He is currently an associate professor in the College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, China. His research interests include machine learning and image processing.

Table 1. PSNR (dB) of Different Algorithms

Image data   Bicubic  Yang   BP     Our Method

data set 0   29.87    32.05  31.23  32.21
data set 1   22.35    24.45  24.67  29.06
data set 2   20.07    21.88  20.73  27.25
data set 3   18.78    21.67  20.42  26.22
data set 4   18.09    19.67  20.70  26.43
data set 5   17.98    19.97  18.95  24.49
data set 6   16.20    17.22  17.32  21.30
data set 7   16.15    17.73  17.85  21.87
data set 8   17.15    18.59  17.80  20.90
data set 9   14.58    16.70  15.90  17.95
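The PSNR values reported above can be reproduced from a reconstructed image and its ground truth. Below is a minimal sketch of the standard computation; the function name `psnr` and the 8-bit peak value of 255 are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def psnr(reference, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio (in dB) between two same-sized images."""
    ref = np.asarray(reference, dtype=np.float64)
    rec = np.asarray(reconstructed, dtype=np.float64)
    # Mean squared error over all pixels.
    mse = np.mean((ref - rec) ** 2)
    if mse == 0:
        # Identical images: PSNR is unbounded.
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)
```

Higher PSNR indicates a reconstruction closer to the ground-truth high-resolution image, which is how the columns of Table 1 are compared.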

Table 2. Improvement of the Recognition Rate

Module width  Before super-resolution  After super-resolution

1.0           18.25%                   69.50%
1.5           38.50%                   81.25%
2.0           62.00%                   94.50%
2.5           72.75%                   97.50%
3.0           95.75%                   100.00%

Table 3. PSNR (dB) of Different Features

Different Features                       data set A  data set B  data set C

second-order derivatives                 26.41       30.24       32.45
LBP + second-order derivatives           27.65       31.68       33.90
Kirsch + second-order derivatives        27.82       32.43       34.87
LBP + Kirsch + second-order derivatives  28.29       32.65       35.37
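As background for the LBP rows in Table 3, the basic 8-neighbour LBP compares each pixel's neighbours against the centre and packs the results into an 8-bit code. The sketch below shows one common formulation; the function name and the clockwise bit ordering are illustrative assumptions, and the paper's exact LBP variant [26] may differ:

```python
import numpy as np

def lbp_8(image):
    """Basic 8-neighbour LBP codes for the interior pixels of a grayscale image."""
    img = np.asarray(image, dtype=np.int32)
    center = img[1:-1, 1:-1]
    # Neighbour offsets, clockwise starting from the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(offsets):
        # Shifted view of the image aligned with the interior pixels.
        neighbour = img[1 + dy:img.shape[0] - 1 + dy,
                        1 + dx:img.shape[1] - 1 + dx]
        # Set this bit where the neighbour is at least as bright as the centre.
        codes |= (neighbour >= center).astype(np.int32) << bit
    return codes
```

For a binary-looking pattern such as a bar code, these codes capture the local black/white layout around each pixel, which is what makes LBP a useful texture feature in the comparison above.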

Table 4. PSNR (dB) of Noisy Inputs

Noise         data set A              data set B              data set C
Level   Bicubic  Yang   Ours    Bicubic  Yang   Ours    Bicubic  Yang   Ours

 0      25.43    26.97  28.29   28.80    30.89  32.65   30.75    33.30  35.37
 4      25.34    26.85  28.14   28.59    30.56  32.17   30.43    32.73  34.54
 6      25.22    26.72  27.95   28.33    30.18  31.63   30.05    32.13  33.68
 8      25.07    26.53  27.71   28.00    29.70  31.01   29.57    31.42  32.68
10      24.89    26.32  27.42   27.64    29.22  30.35   29.08    30.69  31.74
COPYRIGHT 2017 KSII, the Korean Society for Internet Information
No portion of this article can be reproduced without the express written permission from the copyright holder.

Article Details
Author: Shen, Yiling; Liu, Ningzhong; Sun, Han
Publication: KSII Transactions on Internet and Information Systems
Article Type: Report
Date: Apr 1, 2017
Words: 5779