# Image Recognition Based on Two-Dimensional Principal Component Analysis Combining with Wavelet Theory and Frame Theory

1. Introduction

Image recognition is an important area of artificial intelligence, and its accuracy continues to improve. Principal component analysis (PCA) is a common linear-transformation method for feature extraction in image recognition and has been thoroughly developed. However, its computational cost is high. In face recognition, the one-dimensional PCA algorithm must reshape each two-dimensional image matrix into a one-dimensional vector; for an image of resolution 112 × 92, the resulting vector has dimension 10304, and the larger the data set, the larger the stacked data matrix becomes. When the data set contains 100000 images, the data matrix is 10304 × 100000, and applying one-dimensional PCA to it directly is expensive. The whole feature-extraction process therefore operates in a high-dimensional space with correspondingly high computational complexity. Moreover, in the small-sample case, vectorization discards the spatial structure of the image, which is not conducive to accurate detection and recognition. To address these defects of one-dimensional PCA, [1] proposes a face recognition algorithm based on 2DPCA, a linear unsupervised statistical method. Since face images are high-dimensional and expensive to process, and one-dimensional PCA increases both computational complexity and running time, 2DPCA was introduced to process the images directly [2]. 2DPCA is a feature extraction method that operates on the image matrix itself and thus avoids the conversion of a two-dimensional image matrix into a one-dimensional vector that one-dimensional PCA requires.
This greatly reduces the amount of computation. 2DPCA also exploits the differences between samples, effectively preserves the sample structure information, adds discriminative information, and has become a new research hot spot [3]. Reference [4] explains the application of linear transformations in matrix theory: it uses 2DPCA to find the feature vectors and then applies classical one-dimensional PCA for further compression, so that the dimension is reduced. The results show that the covariance matrix can be obtained directly from the image, which is more effective in terms of recognition rate. In [5-9], the classical 2DPCA algorithm is improved, but the intra-class feature vectors are not fully considered. Image recognition algorithms are constantly being updated and optimized: the classical PCA algorithm, improved PCA and 2DPCA algorithms, the SVM algorithm, and convolutional neural networks can all be used for face recognition. References [10-12] first partition the image into blocks, then use the 2DPCA algorithm to extract features from each block, and finally apply information fusion to complete the feature extraction. Because these algorithms use only local information, they easily lose the information between blocks of the original face image, so the extracted information is not complete enough. In [13], a face recognition method based on average-partition 2DPCA is proposed. The image matrix is first divided into blocks, the intraclass normalized blocks are used to construct the overall scatter matrix, and the projection is then carried out; this reduces the feature dimension, avoids singular value decomposition, and reduces the within-class distance of the samples. The experimental results show that the recognition rate of this method is higher than that of the 2DPCA algorithm.
The above algorithms do not apply a wavelet transform; they process the image directly with the 2DPCA algorithm and therefore cannot effectively handle external influences (such as the changes of expression and posture in the ORL face database), so the accuracy of the extracted features is not high. In [14], a face recognition algorithm combining the advantages of the wavelet transform (WT) and 2DPCA is presented. First, a first-order wavelet transform decomposes the image, reduces the noise, enhances the feature information, and compensates for external influences (such as the changes of expression and posture in the ORL face database). Then the 2DPCA algorithm reduces the image dimensions and extracts the features. The results show that the recognition rate improves after the wavelet transform is applied. However, after wavelet processing, the unimproved 2DPCA algorithm still does not use the redundant information between the eigenvectors, it is difficult to obtain the maximum projection value, and the extracted information is not accurate enough. Therefore, this paper proposes an image recognition method based on 2DPCA combined with wavelet theory and frame theory, which considers the feature information more fully and improves the recognition rate.

In summary, although the recognition rates of these improved algorithms are slightly higher than that of the classical 2DPCA face recognition algorithm, the recognition effect is still not very good for similar features. The analysis shows that none of these algorithms uses the redundant information between the feature vectors, so it is difficult to obtain the maximum projection value and the extracted information is not accurate enough. This work therefore denoises images by wavelet decomposition and adopts an improved 2DPCA for dimensionality reduction: the orthogonal principal component space is expanded into the (nonorthogonal) principal component space of a frame, so that more sufficient information can be obtained in the frame principal component space. The algorithm is compared with other algorithms on the standard ORL face recognition database; recognition rate and recognition time are compared through simulation experiments, demonstrating the effectiveness of image recognition by two-dimensional principal component analysis combined with wavelet theory and frame theory.

2. Image Preprocessing Based on Feature Enhancement

When detecting and recognizing small-target images against a background of strong noise, processing the original image directly degrades the detection results. Preprocessing the image therefore helps to extract its features and thus improves the detection accuracy and recognition rate. In the ORL face database, images are affected by the similarity of features such as pose, so the feature information can be enhanced by a wavelet transform to improve the recognition rate.

This section reviews the idea of the one-dimensional wavelet transform, whose basic idea is introduced in Figure 1. After the image is processed by the wavelet, the image information is decomposed into subband image signals with different spatial resolutions, frequency characteristics, and directional characteristics. In this way, wavelet decomposition provides good local information. At each level of the wavelet transform, the image is divided into one low-frequency component and three high-frequency components (corresponding to the horizontal, vertical, and diagonal detail components, respectively) [15].

Given an image described as F, the result of first-order wavelet decomposition is shown in Figure 1. Among them, LL is the low-frequency component of the image, a smoothed version of the original. HL represents the horizontal high-frequency component, LH the vertical high-frequency component, and HH the diagonal high-frequency component of the image.

Let $F_i \in \mathbb{R}^{m \times n}$, $i = 1, 2, \ldots, N$, denote the training sample images, where $N$ is the number of training samples. All the images are processed in order: each passes through a first-order wavelet decomposition, and its low-frequency and high-frequency components are then treated. Both the low-frequency and high-frequency components are subband images produced by the wavelet decomposition; to match the size of the training sample images, they are expanded by padding with a zero matrix, giving the matrices LL, HL, LH, and HH as follows:

$$LL = \begin{bmatrix} LL' & 0 \\ 0 & 0 \end{bmatrix}, \quad HL = \begin{bmatrix} 0 & HL' \\ 0 & 0 \end{bmatrix}, \quad LH = \begin{bmatrix} 0 & 0 \\ LH' & 0 \end{bmatrix}, \quad HH = \begin{bmatrix} 0 & 0 \\ 0 & HH' \end{bmatrix} \tag{1}$$

where $LL'$, $HL'$, $LH'$, and $HH'$ are the half-size subband matrices of Figure 1.

After wavelet reconstruction, the image A is obtained by

$$A = \alpha LL + \beta HL + \beta LH + \beta HH \tag{2}$$

Since the main energy of the noise is generally concentrated in the detail components of the wavelet decomposition, the effect of the noise can be suppressed by attenuating the high-frequency components, and the feature information can be enhanced by amplifying the low-frequency component. Accordingly, the low-frequency coefficient $\alpha$ ranges over 1.2-1.5 and the high-frequency coefficient $\beta$ over 0.9-1.0; if $\alpha$ or $\beta$ exceeds this range, the image will be distorted.
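The preprocessing step above can be sketched in code. This is an illustrative implementation only: it assumes a single-level Haar wavelet (the text does not name the wavelet family) and applies the coefficients $\alpha$ and $\beta$ by scaling the subbands inside the inverse transform, rather than summing the zero-padded matrices of equation (1); the function names are our own.

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2-D Haar decomposition of an image with even dimensions."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    LL = (a + b + c + d) / 2.0   # low-frequency (smoothed) component
    HL = (a - b + c - d) / 2.0   # horizontal detail
    LH = (a + b - c - d) / 2.0   # vertical detail
    HH = (a - b - c + d) / 2.0   # diagonal detail
    return LL, HL, LH, HH

def haar_idwt2(LL, HL, LH, HH, alpha=1.5, beta=1.0):
    """Inverse Haar transform with the low band amplified by alpha and the
    detail bands weighted by beta, in the spirit of equation (2)."""
    m, n = LL.shape
    sLL, sHL, sLH, sHH = alpha * LL, beta * HL, beta * LH, beta * HH
    out = np.empty((2 * m, 2 * n))
    out[0::2, 0::2] = (sLL + sHL + sLH + sHH) / 2.0
    out[0::2, 1::2] = (sLL - sHL + sLH - sHH) / 2.0
    out[1::2, 0::2] = (sLL + sHL - sLH - sHH) / 2.0
    out[1::2, 1::2] = (sLL - sHL - sLH + sHH) / 2.0
    return out
```

With $\alpha = \beta = 1$ the pair is a perfect-reconstruction transform, which is a convenient sanity check before enhancement coefficients are applied.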

3. The 2DPCA Algorithm of the Frame Theory

3.1. The 2DPCA Algorithm. Given images $F_1, F_2, \ldots, F_N$, the images $A_1, A_2, \ldots, A_N$ are obtained by the wavelet transformation of $F_1, F_2, \ldots, F_N$, with $A_i \in \mathbb{R}^{m \times n}$ ($i = 1, \ldots, N$). Let $X \in \mathbb{R}^{n \times 1}$. Each image is then transformed by the linear transformation:

$$Y_i = A_i X \tag{3}$$

The image is projected directly onto $X$ to obtain $Y_i$, called the projection feature vector of the image $A_i$. The optimal projection direction $X$ can be determined from the scatter of the projected feature vectors $Y_i$.

Let $S_x$ denote the covariance matrix of the projected feature vectors $Y_i$ of the training samples, and let $\operatorname{tr}(S_x)$ denote the trace of $S_x$. When $\operatorname{tr}(S_x)$ attains its maximum, the physical meaning is as follows: find a projection axis $X$ such that the overall scatter of the feature vectors obtained by projecting all training samples onto it is maximized. $\operatorname{tr}(S_x)$ can be written as follows:

$$\operatorname{tr}(S_x) = \frac{1}{N} \sum_{i=1}^{N} \left[ (A_i - \bar{A}) X \right]^T \left[ (A_i - \bar{A}) X \right] \tag{4}$$

where $\bar{A}$ is the mean of the training images.

By (4), we can get

$$\operatorname{tr}(S_x) = X^T G X \tag{5}$$

$$G = \frac{1}{N} \sum_{i=1}^{N} (A_i - \bar{A})^T (A_i - \bar{A}), \qquad G \in \mathbb{R}^{n \times n} \tag{6}$$
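As a brief sketch, the image scatter matrix $G$ of equation (6) can be computed directly with NumPy; the function name and the $(N, m, n)$ data layout are our own choices.

```python
import numpy as np

def image_scatter_matrix(samples):
    """G = (1/N) * sum_i (A_i - A_bar)^T (A_i - A_bar), as in equation (6).

    samples: array of shape (N, m, n); returns the n x n matrix G."""
    A = np.asarray(samples, dtype=float)
    A_bar = A.mean(axis=0)            # mean training image
    C = A - A_bar                     # centered images
    # sum over i of C_i^T @ C_i, computed in one einsum call
    return np.einsum('iab,iac->bc', C, C) / len(A)
```

Because each term is $(A_i - \bar{A})^T (A_i - \bar{A})$, the result is symmetric and positive semidefinite, which is what guarantees the real, nonnegative eigenvalues used below.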

$X$ is taken from the orthogonal matrix composed of the eigenvectors of $G$. Let the eigenvalues of the covariance matrix $G$ be $\lambda_i$ ($i = 1, 2, \ldots, n$) with $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n$, and let the corresponding eigenvectors be $u_i$ ($i = 1, 2, \ldots, n$), so that $U = [u_1, u_2, \ldots, u_n]$. The spectral decomposition of the matrix $G$ is then:

$$G = U \operatorname{diag}(\lambda_1, \ldots, \lambda_n) U^T = \sum_{i=1}^{n} \lambda_i u_i u_i^T \tag{7}$$

Substituting (7) into (5) yields

$$\operatorname{tr}(S_x) = X^T \sum_{i=1}^{n} \lambda_i u_i u_i^T X = \sum_{i=1}^{n} \lambda_i (X^T u_i)(u_i^T X) \tag{8}$$

The feature subspace is constructed from the eigenvectors $u_i$ ($i = 1, 2, \ldots, d$) corresponding to the first $d$ eigenvalues $\lambda_i$ ($i = 1, 2, \ldots, d$). Then $U_d = [u_1, u_2, \ldots, u_d]$, $d \leq n$.

$$\operatorname{tr}_d(S_x) = X^T \sum_{i=1}^{d} \lambda_i u_i u_i^T X = \sum_{i=1}^{d} \lambda_i (X^T u_i)(u_i^T X) \tag{9}$$

In this case, the terms of (9) are largest for the eigenvectors $u_i$ of $G$ associated with its largest eigenvalues $\lambda_i$: the projection onto $X$ is largest along those eigenvectors, so choosing $X$ along them maximizes $\operatorname{tr}(S_x)$.

The physical meaning is that the overall scatter of the feature vectors obtained by projecting the image matrices onto this space is the largest. The optimal projection axis is the eigenvector corresponding to the largest eigenvalue of the overall image scatter matrix $G$; the vectors of the optimal projection space $X$ are normalized orthonormal vectors, which maximize $\operatorname{tr}(S_x)$.

When the eigenvalues of $G$ are arranged from large to small, the orthonormal eigenvectors corresponding to the first $d$ eigenvalues satisfy:

$$\{X_1, \ldots, X_d\} = \arg\max \operatorname{tr}(S_x), \qquad X_i^T X_j = 0, \quad i \neq j, \quad i, j = 1, \ldots, d \tag{10}$$

The feature matrix $[X_1, \ldots, X_d]$ can be used to extract features: a given image sample $A$ is projected onto each $X_k$, such that

$$Y_k = A X_k, \quad k = 1, 2, \ldots, d \tag{11}$$

In this way we obtain a set of projection feature vectors $Y_1, \ldots, Y_d$, called the principal component vectors of the image $A$. Choosing a suitable value of $d$ then forms an $m \times d$ matrix, called the feature image of $A$, that is,

$$Y = [Y_1, Y_2, \ldots, Y_d] = A [X_1, X_2, \ldots, X_d] \tag{12}$$

$Y$ is called the feature matrix or feature image of $A$.
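Putting equations (6)-(12) together, a minimal sketch of 2DPCA feature extraction might look as follows (the names are illustrative; note that `numpy.linalg.eigh` returns eigenvalues in ascending order, so the columns are reversed before taking the top $d$):

```python
import numpy as np

def twodpca_axes(samples, d):
    """Return the optimal projection axes X_1..X_d: the orthonormal
    eigenvectors of G belonging to its d largest eigenvalues (eq. (10))."""
    A = np.asarray(samples, dtype=float)
    C = A - A.mean(axis=0)
    G = np.einsum('iab,iac->bc', C, C) / len(A)   # equation (6)
    _, V = np.linalg.eigh(G)                      # eigenvalues ascending
    return V[:, ::-1][:, :d]                      # top-d eigenvectors as columns

def feature_image(A, Xd):
    """Y = A [X_1, ..., X_d], the m x d feature image of equation (12)."""
    return A @ Xd
```

Since $G$ is symmetric, `eigh` is the appropriate solver and the returned axes are automatically orthonormal, matching the constraint in equation (10).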

3.2. The 2DPCA Algorithm of the Frame Theory. For small targets against a background of strong noise, where some features are similar or the extracted information is incomplete, the frame-theory 2DPCA algorithm makes the extracted features more accurate.

The mathematical idea of the algorithm is as follows. After the image has been preprocessed, the frame theory proposed in this paper is applied, which mainly modifies the projection space used for feature extraction. Eigenvectors corresponding to different eigenvalues are mutually orthogonal, so the $d$ eigenvalues give $d$ mutually orthogonal eigenvectors, and their arbitrary combinations yield $2^d$ possibilities; any one of these combinations can form a projection space for extracting the image features. In this paper, one such combination is used: between each pair of adjacent eigenvectors, the axis of largest projection is found (this axis is not orthogonal to the other eigenvectors). For $d$ eigenvalues, the number of projection vectors is then $2d - 1$.

The specific frame-theory 2DPCA algorithm is as follows. After 2DPCA extracts the projection axes $X_1, X_2, \ldots, X_d$, a new vector is inserted between every pair of adjacent projection axes $X_i$ and $X_{i+1}$ ($i = 1, 2, \ldots, d-1$). In this way $2d - 1$ feature vectors are obtained and can be used to extract image features. In this work, the inserted vector is the mean of the two adjacent eigenvectors, which yields a set of nonorthogonal basis vectors. For a given image $A$, the image is projected onto the new projection axes $X'_k$:

$$Y'_k = A X'_k, \quad k = 1, 1.5, 2, \ldots, d \tag{13}$$

In this way we obtain a new set of projection feature vectors $Y_1, Y'_{1.5}, Y_2, \ldots, Y_d$, also called the principal component vectors of $A$. Choosing a suitable value of $d$ then forms an $m \times (2d - 1)$ matrix, called the feature image of the image $A$:

$$Y' = [Y_1, Y'_{1.5}, Y_2, \ldots, Y_d] = A [X_1, X'_{1.5}, X_2, \ldots, X_d] \tag{14}$$

$Y'$ is the feature image of the image $A$ extracted under the frame-theory 2DPCA algorithm.

Finally, the above feature images are used for recognition.

After the image samples have been wavelet transformed, the frame-theory 2DPCA algorithm is applied to obtain the feature matrix of each image, and the nearest-neighbor criterion is used for recognition. The feature matrix of a training sample is $Y'_i = [Y^{(i)}_1, Y^{(i)}_2, \ldots, Y^{(i)}_d]$ and that of a testing sample is $Y'_j = [Y^{(j)}_1, Y^{(j)}_2, \ldots, Y^{(j)}_d]$ ($i, j = 1, 2, \ldots, N$).

The distance can be obtained by the following:

$$d(Y'_i, Y'_j) = \sum_{k=1}^{d} \left\| Y^{(i)}_k - Y^{(j)}_k \right\|_2 \tag{15}$$

where $\| Y^{(i)}_k - Y^{(j)}_k \|_2$ denotes the Euclidean distance between $Y^{(i)}_k$ and $Y^{(j)}_k$. The total number of samples is $N$, and recognition is finally performed according to the nearest-neighbor criterion.
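The distance of equation (15) and the nearest-neighbor decision can be sketched as follows (function names are our own):

```python
import numpy as np

def feature_distance(Yi, Yj):
    """d(Y'_i, Y'_j) = sum_k ||Y_k^(i) - Y_k^(j)||_2, equation (15):
    the sum of column-wise Euclidean distances of two feature images."""
    return float(np.linalg.norm(Yi - Yj, axis=0).sum())

def nearest_neighbor(train_feats, train_labels, test_feat):
    """Assign the label of the training feature image closest to test_feat."""
    dists = [feature_distance(Y, test_feat) for Y in train_feats]
    return train_labels[int(np.argmin(dists))]
```

Note that (15) sums column norms rather than taking the Frobenius norm of the whole matrix, so each principal component vector contributes its full Euclidean distance to the total.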

4. Simulation Experiment

In this section, simulation experiments are used to demonstrate the effectiveness of the proposed method. The experimental data set is the ORL face database [16] (created by the AT&T Laboratories at the University of Cambridge, it contains 400 images of 40 faces; some volunteers' images include changes in pose, expression, and facial ornaments, and the database is often used in the early stages of face recognition research).

4.1. Experimental Conditions. To verify the validity of image recognition by two-dimensional principal component analysis combined with wavelet theory and frame theory, the proposed method is compared with the classical 2DPCA algorithm, the 2DPCA algorithm with wavelet transform, and the frame-theory 2DPCA algorithm without wavelet processing. The test object is the ORL face database [16], which contains 40 people with 10 different poses and expressions each, 400 images in total. Each face image is 112 × 92 pixels with 256 gray levels. The facial expressions (eyes open or closed, smiling or not smiling) and facial details (wearing glasses or not) all vary in the ORL face database. Figure 2 shows samples of the first person in the ORL face database. The first 5 images of each person are selected as the training set (200 images in total), and the last 5 images of each person as the testing set (200 images in total). In the wavelet processing, the reconstructed sample images are obtained after the wavelet transform with $\alpha = 1.5$ and $\beta = 1.0$ in formula (2). In the frame-theory 2DPCA algorithm and the classical 2DPCA algorithm, the eigenvectors corresponding to the larger eigenvalues of the covariance matrix are selected as the best projection directions. Since the choice of projection axes affects the correct face recognition rate, the experiments examine how the correct recognition rate and running time of the 2DPCA algorithm, the wavelet-decomposed 2DPCA algorithm, the frame-theory 2DPCA algorithm, and the proposed algorithm change with the number of projection axes on the ORL face database. The simulation results are shown in Figure 1 and Tables 1 and 2.

4.2. Results Analysis. Table 1 shows how the correct recognition rate of the 2DPCA algorithm combined with wavelet theory and frame theory varies with the number of projection axes on the ORL face database, and Table 2 shows the corresponding variation of the recognition time. By comparison, the proposed method improves the recognition rate and reduces the recognition time relative to the other algorithms in this paper.

The flow chart of the proposed algorithm is shown in Figure 3.

5. Conclusion

In this paper, a feature extraction method is proposed for images with similar features against a strong-noise background. Image preprocessing based on feature enhancement is first applied, which makes the image less sensitive to other noise factors. Then a 2DPCA algorithm based on frame theory is proposed to extract the face features. The experimental results show that the proposed algorithm not only improves the face recognition rate but also shortens the recognition time. The algorithm applies frame theory to the eigenvectors corresponding to the eigenvalues, so that the redundant information of the image can be used to extract feature information more effectively when identifying similar image features.

Data Availability

(1) The experimental data set is the ORL face database (created by the AT&T Laboratories at the University of Cambridge, it contains 400 images of 40 faces; some volunteers' images include changes in pose, expression, and facial ornaments, and the database is often used in the early stages of face recognition research). The data set used here may be small, but the ultimate goal of this experiment is to verify image recognition and classification based on two-dimensional principal component analysis combined with wavelet theory and frame theory; in future work we will carry out multiple experiments and extend them to different data sets collected by ourselves. (2) First, each image is processed by wavelet threshold denoising. This is the image preprocessing required for applying different data sets; it helps to extract the features of the image and thus improves the detection accuracy and classification recognition rate. (3) After preprocessing, the frame theory proposed in this paper is adopted, which mainly modifies the projection space used for feature extraction. Eigenvectors corresponding to different eigenvalues are mutually orthogonal, so the $d$ eigenvalues give $d$ mutually orthogonal eigenvectors, whose arbitrary combinations yield $2^d$ possibilities; any one of these combinations can form a projection space for extracting the image features. In this experiment, one such combination is used to find the axis of largest projection between each pair of adjacent eigenvectors (this axis is not orthogonal to the other eigenvectors); for $d$ eigenvalues, the number of projection vectors is $2d - 1$. ORL Database of Faces (ORL): http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html.

https://doi.org/10.1155/2018/9061796

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China under grant nos. 61605026, U1504616, and 61503123, the Program for Science & Technology Innovation Talents in Universities of Henan Province under Grant 17HASTIT021, Basic Research Project of Henan Education Department under Grant 13A510188, and the Scientific Research Foundation of Henan University of Technology under grant no. 2015RCJH14.

References

[1] J. Yang, D. Zhang, and A. F. Frangi, "Two-dimensional PCA: a new approach to appearance-based face representation and recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 1, pp. 131-137, 2004.

[2] H. Guo-hui and G. Jun-ying, "Application study for 2DPCA in face recognition," Computer Engineering and Design, vol. 27, no. 24, pp. 4667-4673, 2006.

[3] D. Zhang and Z. Zhou, "Two-directional two-dimensional PCA for efficient face representation and recognition," Neurocomputing, vol. 69, no. 3, pp. 224-231, 2005.

[4] T. Ziyou and L. Jing, "Face Recognition Based on Linear Transformation Theory," Journal of Jishou University, vol. 32, no. 3, pp. 55-58, 2011.

[5] L. Wang, X. Wang, X. Zhang, and J. Feng, "The equivalence of two-dimensional PCA to line-based PCA," Pattern Recognition Letters, vol. 26, no. 1, pp. 57-60, 2005.

[6] L. Defu and H. Xin, "Face Recognition system based on two-dimensional PCA and SVM algorithm," Journal of Guilin University of Electronic Technology, vol. 37, no. 5, pp. 391-395, 2017.

[7] F. Fei, J. Baohua, L. Peixue, and C. Yujie, "Application of Improved 2DPCA Algorithm in Face Recognition," Computer Science, vol. 44, no. S2, pp. 267-268, 311, 2017.

[8] Y. Xueyi, W. Daan, H. Tianshu, X. Jingwen, and G. Yafeng, "Novel 2D-PCA face recognition based on tensor," Computer Engineering and Applications, vol. 53, no. 6, pp. 1-6, 2017.

[9] X. D. Li and S. M. Fei, "New face recognition method based on improved modular 2DPCA," Journal of System Simulation, vol. 21, no. 15, pp. 4672-4675, 2009.

[10] L.-W. Wang, X. Wang, M. Chang, and J.-F. Feng, "Is two-dimensional PCA a new technique?" Acta Automatica Sinica, vol. 31, no. 5, pp. 782-787, 2005.

[11] M.-H. Yang, "Kernel eigenfaces vs. kernel fisherfaces: Face recognition using kernel methods," in Proceedings of the 5th IEEE International Conference on Automatic Face Gesture Recognition, FGR 2002, pp. 215-220, IEEE, Washington, DC, USA, May 2002.

[12] S. Li, D. Gong, and Y. Yuan, "Face recognition using Weber local descriptors," Neurocomputing, vol. 122, pp. 272-283, 2013.

[13] L. Jingping, "A M-2DPCA Face Recognition Method," Journal of Changchun Normal University, vol. 33, no. 02, pp. 40-44, 2014.

[14] G. Junying and L. Chunzhi, "Face Recognition Based on Wavelet Transform, Two-Dimensional Principal Component Analysis and Independent Component Analysis," Pattern Recognition and Artificial Intelligence, vol. 20, no. 3, pp. 377-381, 2007.

[15] S. G. Mallat, "A theory of multiresolution signal decomposition: the wavelet representation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 11, no. 7, pp. 674-693, 1989.

[16] ORL Database of Faces: http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html.

Pingping Tao, (1) Xiaoliang Feng, (1) and Chenglin Wen (1,2)

(1) College of Electrical Engineering, Henan University of Technology, Zhengzhou, China

(2) School of Automation, Hangzhou Dianzi University, Hangzhou, China

Correspondence should be addressed to Chenglin Wen; wencl@hdu.edu.cn

Received 19 April 2018; Revised 6 August 2018; Accepted 27 August 2018; Published 19 September 2018

Academic Editor: Daniel Morinigo-Sotelo

Caption: FIGURE 1: First-order wavelet decomposition.

Caption: FIGURE 2: Sample of ORL face image.

Caption: FIGURE 3: Flow chart of the proposed algorithm.

TABLE 1: Comparison of the recognition rate (%) between the 2DPCA algorithm and the proposed algorithm under different numbers of principal components on the ORL database.

| Algorithm | P=6 | P=8 | P=10 | P=20 | P=45 |
|---|---|---|---|---|---|
| 2DPCA | 89.9 | 90.2 | 91.7 | 92.5 | 93.8 |
| WT-2DPCA | 90.1 | 90.3 | 91.9 | 92.6 | 93.9 |
| Frame-theory 2DPCA without wavelet transform | 90.1 | 90.6 | 92.1 | 92.9 | 94.2 |
| The algorithm in this paper | 92.3 | 93.1 | 93.9 | 94.1 | 94.8 |

TABLE 2: Comparison of the recognition time (s) between the 2DPCA algorithm and the proposed algorithm under different numbers of principal components on the ORL database.

| Algorithm | P=6 | P=8 | P=10 | P=20 | P=45 |
|---|---|---|---|---|---|
| 2DPCA | 1.8722 | 2.1367 | 2.5691 | 3.2425 | 3.8973 |
| WT-2DPCA | 1.7879 | 2.1109 | 2.3078 | 3.1213 | 3.3677 |
| Frame-theory 2DPCA without wavelet transform | 1.5456 | 2.0139 | 2.1214 | 2.8476 | 3.1911 |
| The algorithm in this paper | 1.5153 | 2.0031 | 2.1128 | 2.7659 | 2.9098 |

Publication: Journal of Control Science and Engineering (Research Article), January 1, 2018.