# Two classes of almost unbiased type principal component estimators in linear regression model

1. Introduction

Consider the following multiple linear regression model:

$$y = X\beta + \varepsilon, \qquad (1)$$

where $y$ is an $n \times 1$ vector of responses, $X$ is an $n \times p$ known design matrix of rank $p$, $\beta$ is a $p \times 1$ vector of unknown parameters, $\varepsilon$ is an $n \times 1$ vector of disturbances assumed to have mean vector $0$ and variance-covariance matrix $\sigma^2 I_n$, and $I_n$ is the identity matrix of order $n$.

According to the Gauss-Markov theorem, the ordinary least squares estimator (OLSE) of $\beta$ in (1) is

$$\hat{\beta} = (X'X)^{-1}X'y. \qquad (2)$$

It has long been treated as the best estimator. However, many results have shown that the OLSE is no longer a good estimator when multicollinearity is present. To overcome this problem, many biased estimators have been proposed, such as the principal components regression estimator (PCRE) [1], the ridge estimator (RE) [2], the Liu estimator [3], the almost unbiased ridge estimator (AURE) [4], and the almost unbiased Liu estimator (AULE) [5].
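To see concretely why multicollinearity degrades the OLSE, the following NumPy sketch (the design matrix, noise level, and the `1e-3` perturbation are arbitrary illustrative choices, not from the paper) computes the OLSE of (2) on a nearly collinear design and evaluates the trace of $(X'X)^{-1}$, which scales the estimator's total variance:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 3

# Nearly collinear design: the third column almost duplicates the first.
X = rng.standard_normal((n, p))
X[:, 2] = X[:, 0] + 1e-3 * rng.standard_normal(n)
beta = np.array([1.0, 2.0, 3.0])
y = X @ beta + rng.standard_normal(n)

# OLSE of (2): beta_hat = (X'X)^{-1} X'y, via a linear solve.
S = X.T @ X
beta_ols = np.linalg.solve(S, X.T @ y)

# The total OLSE variance is sigma^2 * trace(S^{-1});
# near-collinearity makes this trace explode.
print(np.trace(np.linalg.inv(S)))
```

With a well-conditioned design this trace is of order $p/n$; here it is several orders of magnitude larger, which is exactly the instability the biased estimators below are designed to tame.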

In the hope that a combination of two different estimators might inherit the advantages of both, Kaciranlar et al. [6] improved Liu's approach and introduced the restricted Liu estimator. Akdeniz and Erol [7] compared some biased estimators in linear regression in the mean squared error matrix (MSEM) sense. By combining the mixed estimator and the Liu estimator, Hubert and Wijekoon [8] obtained a two-parameter estimator, a general estimator that includes the OLSE, the ridge estimator, and the Liu estimator. Baye and Parker [9] proposed the r-k class estimator, which includes the PCRE, the RE, and the OLSE as special cases. Then Kaciranlar and Sakallioglu [10] proposed the r-d class estimator, a generalization of the OLSE, the PCRE, and the Liu estimator. Building on the r-k and r-d estimators, Xu and Yang [11] considered the restricted r-k and restricted r-d estimators, and Wu and Yang [12] introduced the stochastic restricted r-k and stochastic restricted r-d estimators, respectively.

The primary aim of this paper is to introduce two new classes of estimators, one of which includes the OLSE, PCRE, and AURE as special cases while the other includes the OLSE, PCRE, and AULE as special cases, and thereby to provide alternative methods for overcoming multicollinearity in linear regression.

The paper is organized as follows. In Section 2, the new estimators are introduced. In Section 3, some properties of the new estimators are discussed. A Monte Carlo simulation is given in Section 4. Finally, some conclusions are drawn in Section 5.

2. The New Estimators

In the linear model (1), the almost unbiased ridge estimator (AURE) proposed by Singh et al. [4] and the almost unbiased Liu estimator (AULE) proposed by Akdeniz and Kaciranlar [5] are defined as

$$\hat{\beta}_{AU}(k) = \left(I - k^2(S + kI)^{-2}\right)\hat{\beta}, \qquad (3)$$

$$\hat{\beta}_{AU}(d) = \left(I - (1 - d)^2(S + I)^{-2}\right)\hat{\beta}, \qquad (4)$$

respectively, where $k > 0$, $0 < d < 1$, and $S = X'X$.
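The estimators in (3) and (4) translate directly into code; here is a minimal NumPy sketch (the function names `aure` and `aule` are ours, and $\hat\beta$ is obtained by a linear solve rather than an explicit inverse):

```python
import numpy as np

def aure(X, y, k):
    """Almost unbiased ridge estimator (3): (I - k^2 (S + kI)^{-2}) beta_hat."""
    p = X.shape[1]
    S = X.T @ X
    beta_hat = np.linalg.solve(S, X.T @ y)      # OLSE
    A = np.linalg.inv(S + k * np.eye(p))
    return (np.eye(p) - k**2 * (A @ A)) @ beta_hat

def aule(X, y, d):
    """Almost unbiased Liu estimator (4): (I - (1-d)^2 (S + I)^{-2}) beta_hat."""
    p = X.shape[1]
    S = X.T @ X
    beta_hat = np.linalg.solve(S, X.T @ y)      # OLSE
    B = np.linalg.inv(S + np.eye(p))
    return (np.eye(p) - (1 - d)**2 * (B @ B)) @ beta_hat
```

Note that $k = 0$ and $d = 1$ both make the correction factor vanish, so each function then returns the OLSE.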

Now consider the spectral decomposition of the matrix $S$:

$$S = T\Lambda T' = (T_r : T_{p-r})\begin{pmatrix}\Lambda_r & 0 \\ 0 & \Lambda_{p-r}\end{pmatrix}(T_r : T_{p-r})', \qquad (5)$$

where $\Lambda_r = \operatorname{diag}(\lambda_1, \dots, \lambda_r)$, $\Lambda_{p-r} = \operatorname{diag}(\lambda_{r+1}, \dots, \lambda_p)$, and $\lambda_1 \ge \lambda_2 \ge \dots \ge \lambda_p > 0$ are the ordered eigenvalues of $S$. The matrix $T = (T_r : T_{p-r})_{p \times p}$ is orthogonal, with $T_r = (t_1, \dots, t_r)$ consisting of its first $r$ columns and $T_{p-r} = (t_{r+1}, \dots, t_p)$ consisting of the remaining $p - r$ columns. Then $T_r' S T_r = \Lambda_r$, and the PCRE of $\beta$ can be written as

$$\hat{\beta}_r = T_r(T_r' S T_r)^{-1} T_r' X' y = T_r \Lambda_r^{-1} T_r' X' y. \qquad (6)$$
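A compact NumPy sketch of (6) (the helper name `pcre` is ours; note that `np.linalg.eigh` returns eigenvalues in ascending order, so they must be reordered to match the convention $\lambda_1 \ge \dots \ge \lambda_p$):

```python
import numpy as np

def pcre(X, y, r):
    """PCRE of (6): T_r Lambda_r^{-1} T_r' X' y, keeping the r largest eigenvalues of S."""
    S = X.T @ X
    lam, T = np.linalg.eigh(S)            # eigh returns ascending eigenvalues
    order = np.argsort(lam)[::-1]         # reorder so lambda_1 >= ... >= lambda_p
    lam, T = lam[order], T[:, order]
    Tr = T[:, :r]
    return Tr @ ((Tr.T @ X.T @ y) / lam[:r])
```

Choosing $r = p$ keeps every component, and the PCRE then coincides with the OLSE.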

The r-k class estimator proposed by Baye and Parker [9] and the r-d class estimator proposed by Kaciranlar and Sakallioglu [10] are defined as

$$\hat{\beta}_r(k) = T_r(\Lambda_r + kI)^{-1} T_r' X' y, \qquad \hat{\beta}_r(d) = T_r(\Lambda_r + I)^{-1}(I + d\Lambda_r^{-1}) T_r' X' y, \qquad (7)$$

respectively. Following Xu and Yang [11], the r-k class estimator and the r-d class estimator can be rewritten as

$$\hat{\beta}_r(k) = T_r T_r' \hat{\beta}(k), \qquad \hat{\beta}_r(d) = T_r T_r' \hat{\beta}(d), \qquad (8)$$

where $\hat{\beta}(k) = T(\Lambda + kI)^{-1} T' X' y = (S + kI)^{-1} X' y$ is the ridge estimator of Hoerl and Kennard [2] and $\hat{\beta}(d) = T(\Lambda + I)^{-1}(I + d\Lambda^{-1}) T' X' y = (S + I)^{-1}(I + dS^{-1}) X' y$ is the Liu estimator proposed by Liu [3].

We now propose two new classes of estimators by combining the PCRE with the AURE and the AULE, namely, the almost unbiased ridge principal components estimator (AURPCE) and the almost unbiased Liu principal components estimator (AULPCE), defined as

$$\hat{\beta}_{AU}(r,k) = T_r T_r'\left(I - k^2(S + kI)^{-2}\right)\hat{\beta} = T_r T_r' G_k \hat{\beta}, \qquad (9)$$

$$\hat{\beta}_{AU}(r,d) = T_r T_r'\left(I - (1 - d)^2(S + I)^{-2}\right)\hat{\beta} = T_r T_r' H_d \hat{\beta}, \qquad (10)$$

respectively, where $G_k = I - k^2(S + kI)^{-2}$ and $H_d = I - (1 - d)^2(S + I)^{-2}$.
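The two proposed estimators in (9) and (10) can be sketched as follows in NumPy (the function and helper names are ours):

```python
import numpy as np

def _top_components(S, r):
    """Columns of T_r: eigenvectors of S for its r largest eigenvalues."""
    lam, T = np.linalg.eigh(S)
    return T[:, np.argsort(lam)[::-1]][:, :r]

def aurpce(X, y, r, k):
    """AURPCE of (9): T_r T_r' G_k beta_hat with G_k = I - k^2 (S + kI)^{-2}."""
    p = X.shape[1]
    S = X.T @ X
    Tr = _top_components(S, r)
    beta_hat = np.linalg.solve(S, X.T @ y)      # OLSE
    A = np.linalg.inv(S + k * np.eye(p))
    Gk = np.eye(p) - k**2 * (A @ A)
    return Tr @ (Tr.T @ (Gk @ beta_hat))

def aulpce(X, y, r, d):
    """AULPCE of (10): T_r T_r' H_d beta_hat with H_d = I - (1-d)^2 (S + I)^{-2}."""
    p = X.shape[1]
    S = X.T @ X
    Tr = _top_components(S, r)
    beta_hat = np.linalg.solve(S, X.T @ y)      # OLSE
    B = np.linalg.inv(S + np.eye(p))
    Hd = np.eye(p) - (1 - d)**2 * (B @ B)
    return Tr @ (Tr.T @ (Hd @ beta_hat))
```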

From the definition of the AURPCE, we can easily obtain the following.

If $r = p$, then $\hat{\beta}_{AU}(r, k) = \hat{\beta}_{AU}(k)$, the AURE.

If $k = 0$ and $r = p$, then $\hat{\beta}_{AU}(r, k) = \hat{\beta}$, the OLSE.

If $k = 0$, then $\hat{\beta}_{AU}(r, k) = \hat{\beta}_r = T_r T_r' \hat{\beta}$, the PCRE.

From the definition of the AULPCE, we can similarly obtain the following.

If $r = p$, then $\hat{\beta}_{AU}(r, d) = \hat{\beta}_{AU}(d)$, the AULE.

If $d = 1$ and $r = p$, then $\hat{\beta}_{AU}(r, d) = \hat{\beta}$, the OLSE (since $H_d = I$ exactly when $d = 1$).

If $d = 1$, then $\hat{\beta}_{AU}(r, d) = T_r T_r' \hat{\beta}$, the PCRE.

Thus $\hat{\beta}_{AU}(r,k)$ can be regarded as a generalization of the PCRE, OLSE, and AURE, while $\hat{\beta}_{AU}(r, d)$ can be regarded as a generalization of the PCRE, OLSE, and AULE.
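These reductions are easy to confirm numerically. The sketch below (arbitrary random data, values purely illustrative) checks the AURPCE special cases listed above:

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 40, 4
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)

S = X.T @ X
beta_hat = np.linalg.solve(S, X.T @ y)          # OLSE
lam, T = np.linalg.eigh(S)
T = T[:, np.argsort(lam)[::-1]]                 # columns ordered by eigenvalue

def aurpce(r, k):
    Tr = T[:, :r]
    A = np.linalg.inv(S + k * np.eye(p))
    Gk = np.eye(p) - k**2 * (A @ A)
    return Tr @ Tr.T @ Gk @ beta_hat

aure = (np.eye(p)
        - 0.5**2 * np.linalg.matrix_power(np.linalg.inv(S + 0.5 * np.eye(p)), 2)) @ beta_hat
pcre = T[:, :2] @ T[:, :2].T @ beta_hat

print(np.allclose(aurpce(p, 0.5), aure))       # r = p        -> AURE   (True)
print(np.allclose(aurpce(p, 0.0), beta_hat))   # r = p, k = 0 -> OLSE   (True)
print(np.allclose(aurpce(2, 0.0), pcre))       # k = 0        -> PCRE   (True)
```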

Furthermore, the bias, dispersion matrix, and mean squared error matrix of the new estimator $\hat{\beta}_{AU}(r,k)$ can be computed as

$$\operatorname{Bias}(\hat{\beta}_{AU}(r,k)) = (T_r T_r' G_k - I)\beta, \qquad D(\hat{\beta}_{AU}(r,k)) = \sigma^2 T_r T_r' G_k S^{-1} G_k' T_r T_r', \qquad (11)$$

$$\operatorname{MSEM}(\hat{\beta}_{AU}(r,k)) = \sigma^2 T_r T_r' G_k S^{-1} G_k' T_r T_r' + (T_r T_r' G_k - I)\beta\beta'(T_r T_r' G_k - I)', \qquad (12)$$

respectively.

In a similar way, the MSEM of $\hat{\beta}_{AU}(r, d)$ is obtained as

$$\operatorname{MSEM}(\hat{\beta}_{AU}(r, d)) = \sigma^2 T_r T_r' H_d S^{-1} H_d' T_r T_r' + (T_r T_r' H_d - I)\beta\beta'(T_r T_r' H_d - I)'. \qquad (13)$$

In particular, letting $r = p$ in (12) and (13) gives the MSEM of the AURE and the AULE:

$$\operatorname{MSEM}(\hat{\beta}_{AU}(k)) = \sigma^2 G_k S^{-1} G_k' + (G_k - I)\beta\beta'(G_k - I)', \qquad \operatorname{MSEM}(\hat{\beta}_{AU}(d)) = \sigma^2 H_d S^{-1} H_d' + (H_d - I)\beta\beta'(H_d - I)'. \qquad (14)$$
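The MSEM expressions above are straightforward to evaluate for a given $\beta$, $\sigma^2$, and design. A hedged NumPy sketch of (12) (the function name is ours; the scalar MSE reported in Section 4 is the trace of this matrix):

```python
import numpy as np

def msem_aurpce(X, beta, sigma2, r, k):
    """MSEM of the AURPCE from (12): sigma^2 P G_k S^{-1} G_k' P
    + (P G_k - I) beta beta' (P G_k - I)', with P = T_r T_r'."""
    p = X.shape[1]
    S = X.T @ X
    lam, T = np.linalg.eigh(S)
    T = T[:, np.argsort(lam)[::-1]]
    P = T[:, :r] @ T[:, :r].T
    A = np.linalg.inv(S + k * np.eye(p))
    Gk = np.eye(p) - k**2 * (A @ A)
    bias = (P @ Gk - np.eye(p)) @ beta
    return sigma2 * P @ Gk @ np.linalg.inv(S) @ Gk.T @ P + np.outer(bias, bias)
```

As a sanity check, setting $r = p$ and $k = 0$ recovers the unbiased OLSE case, where the MSEM is simply the covariance matrix $\sigma^2 S^{-1}$.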

3. Superiority of the Proposed Estimators

For convenience, we first list some notation, definitions, and lemmas needed in the following discussion. For a matrix $M$, $M'$, $M^+$, $\operatorname{rank}(M)$, $\mathcal{R}(M)$, and $\mathcal{N}(M)$ stand for the transpose, Moore-Penrose inverse, rank, column space, and null space of $M$, respectively. $M \ge 0$ means that $M$ is symmetric and nonnegative definite.

Lemma 1. Let $\mathbb{C}^{n \times p}$ denote the set of $n \times p$ complex matrices and $\mathbb{H}^{n \times n}$ the subset of $\mathbb{C}^{n \times n}$ consisting of Hermitian matrices. For $L \in \mathbb{C}^{n \times p}$, let $L^*$, $\mathcal{M}(L)$, and $J(L)$ stand for the conjugate transpose, the range, and the set of all generalized inverses of $L$, respectively. Let $D \in \mathbb{H}^{n \times n}$, let $a_1, a_2 \in \mathbb{C}^{n \times 1}$ be linearly independent, and let $f_{ij} = a_i^* D^- a_j$, $i, j = 1, 2$; if $a_1 \notin \mathcal{M}(D)$, let $s = a_1^*(I - DD^-)^*(I - DD^-)a_2 \,/\, a_1^*(I - DD^-)^*(I - DD^-)a_1$.

Then $D + a_1 a_1^* - a_2 a_2^* \ge 0$ if and only if one of the following sets of conditions holds:

(a) $D \ge 0$, $a_i \in \mathcal{M}(D)$, $i = 1, 2$, $(f_{11} + 1)(f_{22} - 1) \le |f_{12}|^2$;

(b) $D \ge 0$, $a_1 \notin \mathcal{M}(D)$, $a_2 \in \mathcal{M}(D : a_1)$, $(a_2 - s a_1)^* D^- (a_2 - s a_1) \le 1 - |s|^2$;

(c) $D = U\Delta U^* - \lambda v v^*$, $a_i \in \mathcal{M}(D)$, $i = 1, 2$, $v^* a_1 \ne 0$, $f_{11} + 1 \le 0$, $f_{22} - 1 \le 0$, $(f_{11} + 1)(f_{22} - 1) \le |f_{12}|^2$,

where $(U : v)$ is a subunitary matrix ($U$ possibly absent), $\Delta$ is a positive definite diagonal matrix (occurring when $U$ is present), and $\lambda$ is a positive scalar. Further, all expressions in (a), (b), and (c) are independent of the choice of $D^- \in J(D)$.

Proof. Lemma 1 is due to Baksalary and Trenkler [13].

Let us now compare the AURPCE with the AURE and the AULPCE with the AULE, respectively. From (12)-(14), we have

$$\Delta_1 = \operatorname{MSEM}(\hat{\beta}_{AU}(k)) - \operatorname{MSEM}(\hat{\beta}_{AU}(r,k)) = D_1 + b_1 b_1' - b_2 b_2', \qquad \Delta_2 = \operatorname{MSEM}(\hat{\beta}_{AU}(d)) - \operatorname{MSEM}(\hat{\beta}_{AU}(r,d)) = D_2 + b_3 b_3' - b_4 b_4', \qquad (15)$$

where $D_1 = \sigma^2(G_k S^{-1} G_k' - T_r T_r' G_k S^{-1} G_k' T_r T_r')$, $D_2 = \sigma^2(H_d S^{-1} H_d' - T_r T_r' H_d S^{-1} H_d' T_r T_r')$, $b_1 = (G_k - I)\beta$, $b_2 = (T_r T_r' G_k - I)\beta$, $b_3 = (H_d - I)\beta$, and $b_4 = (T_r T_r' H_d - I)\beta$.

Now we use Lemma 1 to discuss the differences $\Delta_1$ and $\Delta_2$, following Sarkar [14] and Xu and Yang [11]. Since $T_r T_r' + T_{p-r} T_{p-r}' = I$,

$$S^{-1} = (T_r T_r' + T_{p-r} T_{p-r}')\, S^{-1} (T_r T_r' + T_{p-r} T_{p-r}'). \qquad (16)$$

We assume that $T_r' S^{-1} T_{p-r} = 0$ and that $T_{p-r}' S^{-1} T_{p-r}$ is invertible; then

$$S^{-1} = T_r T_r' S^{-1} T_r T_r' + T_{p-r} T_{p-r}' S^{-1} T_{p-r} T_{p-r}'. \qquad (17)$$

These assumptions are reasonable: they are equivalent to requiring the partitioned matrix $T' S^{-1} T$ to be block diagonal with its second main diagonal block invertible.

Theorem 2. Suppose that $T_r' S^{-1} T_{p-r} = 0$ and $T_{p-r}' S^{-1} T_{p-r}$ is invertible; then the AURPCE is superior to the AURE in the MSEM sense if and only if $\beta \in \mathcal{N}(F)$, where $F = \sigma^{-1}(T_{p-r}' S^{-1} T_{p-r})^{-1/2} T_{p-r}'$.
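The condition $\beta \in \mathcal{N}(F)$ amounts to $T_{p-r}'\beta = 0$, that is, $\beta$ lying in the span of the retained eigenvectors. A numerical spot-check of the theorem under this condition (random design; all values illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(6)
n, p, r, k, sigma2 = 30, 4, 2, 0.8, 1.0
X = rng.standard_normal((n, p))
S = X.T @ X
lam, T = np.linalg.eigh(S)
T = T[:, np.argsort(lam)[::-1]]
Tr = T[:, :r]

# beta in N(F) means T_{p-r}' beta = 0, i.e. beta in span(T_r).
beta = Tr @ np.array([1.0, -2.0])

Sinv = np.linalg.inv(S)
A = np.linalg.inv(S + k * np.eye(p))
Gk = np.eye(p) - k**2 * (A @ A)
P = Tr @ Tr.T

def msem(M):
    """MSEM of the linear estimator M @ beta_hat under model (1)."""
    bias = (M - np.eye(p)) @ beta
    return sigma2 * M @ Sinv @ M.T + np.outer(bias, bias)

delta1 = msem(Gk) - msem(P @ Gk)      # MSEM(AURE) - MSEM(AURPCE)
print(np.linalg.eigvalsh(delta1).min() >= -1e-10)   # True: nonnegative definite
```

When $T_{p-r}'\beta = 0$ the two bias vectors coincide, so $\Delta_1$ reduces to the nonnegative definite matrix $D_1$, which is what the check confirms.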

Proof. Since

$$G_k = I - k^2(S + kI)^{-2} = T\left(I - k^2(\Lambda + kI)^{-2}\right)T', \qquad (18)$$

we have

$$D_1 = \sigma^2 T_{p-r}\left(I - k^2(\Lambda_{p-r} + kI)^{-2}\right) T_{p-r}' S^{-1} T_{p-r} \left(I - k^2(\Lambda_{p-r} + kI)^{-2}\right) T_{p-r}'. \qquad (19)$$

The Moore-Penrose inverse $D_1^+$ of $D_1$ is then

$$D_1^+ = \sigma^{-2} T_{p-r}\left(I - k^2(\Lambda_{p-r} + kI)^{-2}\right)^{-1} \left(T_{p-r}' S^{-1} T_{p-r}\right)^{-1} \left(I - k^2(\Lambda_{p-r} + kI)^{-2}\right)^{-1} T_{p-r}'. \qquad (20)$$

Note that $D_1 D_1^+ = T_{p-r} T_{p-r}' = I - T_r T_r'$ and that $I - k^2(\Lambda_{p-r} + kI)^{-2}$ is positive definite since $\Lambda_{p-r}$ is invertible; as $D_1 D_1^+ b_1 \ne b_1$, we conclude that $b_1 \notin \mathcal{M}(D_1)$. Moreover,

$$b_2 - b_1 = -T_{p-r}\left(I - k^2(\Lambda_{p-r} + kI)^{-2}\right) T_{p-r}' \beta = D_1 \eta_1, \qquad (21)$$

where $\eta_1 = -\sigma^{-2} T_{p-r}(I - k^2(\Lambda_{p-r} + kI)^{-2})^{-1} (T_{p-r}' S^{-1} T_{p-r})^{-1} T_{p-r}' \beta$. This implies that $b_2 \in \mathcal{M}(D_1 : b_1)$, so the conditions of part (b) of Lemma 1 can be employed. Since $(I - D_1 D_1^-)'(I - D_1 D_1^-) = T_r T_r' T_r T_r' = T_r T_r'$ and $T_r' b_2 = T_r' b_1$, it follows that $s = 1$ in our case. Thus, by Lemma 1, $\hat{\beta}_{AU}(r, k)$ is superior to $\hat{\beta}_{AU}(k)$ in the MSEM sense if and only if $(b_2 - b_1)' D_1^- (b_2 - b_1) = \eta_1' D_1' D_1^- D_1 \eta_1 = \eta_1' D_1 \eta_1 \le 0$.

Observing that

$$\eta_1' D_1 \eta_1 = \sigma^{-2} \beta' T_{p-r} \left(T_{p-r}' S^{-1} T_{p-r}\right)^{-1} T_{p-r}' \beta = (F\beta)'(F\beta), \qquad (22)$$

where $F = \sigma^{-1}(T_{p-r}' S^{-1} T_{p-r})^{-1/2} T_{p-r}'$, the necessary and sufficient condition turns out to be $\beta \in \mathcal{N}(F)$.

Theorem 3. Suppose that $T_r' S^{-1} T_{p-r} = 0$ and $T_{p-r}' S^{-1} T_{p-r}$ is invertible; then the new estimator AULPCE is superior to the AULE in the MSEM sense if and only if $\beta \in \mathcal{N}(F)$, where $F = \sigma^{-1}(T_{p-r}' S^{-1} T_{p-r})^{-1/2} T_{p-r}'$.

Proof. In order to apply Lemma 1, we similarly compute

$$D_2 = \sigma^2 T_{p-r}\left(I - (1 - d)^2(\Lambda_{p-r} + I)^{-2}\right) T_{p-r}' S^{-1} T_{p-r} \left(I - (1 - d)^2(\Lambda_{p-r} + I)^{-2}\right) T_{p-r}'. \qquad (23)$$

Therefore, the Moore-Penrose inverse $D_2^+$ of $D_2$ is given by

$$D_2^+ = \sigma^{-2} T_{p-r}\left(I - (1 - d)^2(\Lambda_{p-r} + I)^{-2}\right)^{-1} \left(T_{p-r}' S^{-1} T_{p-r}\right)^{-1} \left(I - (1 - d)^2(\Lambda_{p-r} + I)^{-2}\right)^{-1} T_{p-r}'. \qquad (24)$$

Since $D_2 D_2^+ = T_{p-r} T_{p-r}'$ and $D_2 D_2^+ b_3 \ne b_3$, we have $b_3 \notin \mathcal{M}(D_2)$. Moreover,

$$b_4 - b_3 = -T_{p-r}\left(I - (1 - d)^2(\Lambda_{p-r} + I)^{-2}\right) T_{p-r}' \beta = D_2 \eta_2, \qquad (25)$$

where $\eta_2 = -\sigma^{-2} T_{p-r}(I - (1 - d)^2(\Lambda_{p-r} + I)^{-2})^{-1} (T_{p-r}' S^{-1} T_{p-r})^{-1} T_{p-r}' \beta$. This implies that $b_4 \in \mathcal{M}(D_2 : b_3)$, so the conditions of part (b) of Lemma 1 can be employed. Since $(I - D_2 D_2^-)'(I - D_2 D_2^-) = T_r T_r' T_r T_r' = T_r T_r'$ and $T_r' b_4 = T_r' b_3$, it follows that $s = 1$ in our case. Thus, by Lemma 1, $\hat{\beta}_{AU}(r, d)$ is superior to $\hat{\beta}_{AU}(d)$ in the MSEM sense if and only if $(b_4 - b_3)' D_2^- (b_4 - b_3) = \eta_2' D_2' D_2^- D_2 \eta_2 = \eta_2' D_2 \eta_2 \le 0$. Observing that

$$\eta_2' D_2 \eta_2 = \sigma^{-2} \beta' T_{p-r} \left(T_{p-r}' S^{-1} T_{p-r}\right)^{-1} T_{p-r}' \beta = (F\beta)'(F\beta), \qquad (26)$$

where $F = \sigma^{-1}(T_{p-r}' S^{-1} T_{p-r})^{-1/2} T_{p-r}'$, the necessary and sufficient condition turns out to be $\beta \in \mathcal{N}(F)$.

4. Monte Carlo Simulation

In order to illustrate the behaviour of the AURPCE and the AULPCE, we perform a Monte Carlo simulation study. Following Li and Yang [15], the explanatory variables and the observations on the dependent variable are generated by

$$x_{ij} = (1 - \gamma^2)^{1/2}\,\omega_{ij} + \gamma\,\omega_{i,p+1}, \qquad y_i = x_{i1}\beta_1 + x_{i2}\beta_2 + \dots + x_{ip}\beta_p + \varepsilon_i, \qquad i = 1, \dots, n,\; j = 1, \dots, p, \qquad (27)$$

where the $\omega_{ij}$ are independent standard normal pseudorandom numbers and $\gamma$ is specified so that the correlation between any two explanatory variables is $\gamma^2$. In this experiment, we choose $r = 2$ and $\sigma^2 = 1$. We consider the AURPCE, AULPCE, AURE, AULE, PCRE, and OLSE and compute their estimated MSE values at different levels of multicollinearity, namely, $\gamma = 0.7, 0.85, 0.99, 0.999$, representing weak, strong, and severe collinearity between the explanatory variables (see Tables 1 and 2). Furthermore, for the convenience of comparison, we plot the estimated MSE values of the estimators for $\gamma = 0.999$ in Figure 1.
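The generating scheme above can be sketched as follows (the function name is ours; we assume the standard interpretation that every $x_{ij}$ shares the common component $\omega_{i,p+1}$, which is what makes all pairwise correlations equal $\gamma^2$):

```python
import numpy as np

def make_collinear_data(n, p, gamma, beta, sigma=1.0, seed=None):
    """Generate (X, y) as in (27): x_ij = sqrt(1 - gamma^2) w_ij + gamma w_{i,p+1},
    so any two distinct regressors have correlation gamma^2."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal((n, p + 1))
    X = np.sqrt(1.0 - gamma**2) * w[:, :p] + gamma * w[:, [p]]
    y = X @ beta + sigma * rng.standard_normal(n)
    return X, y

X, y = make_collinear_data(500, 3, 0.999, beta=np.ones(3), seed=0)
# With gamma = 0.999 the pairwise correlations are near gamma^2 ~ 0.998.
print(np.corrcoef(X, rowvar=False)[0, 1])
```

Feeding such datasets to each estimator and averaging squared errors over replications yields estimated MSE values of the kind reported in Tables 1 and 2.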

From the simulation results in Tables 1 and 2, we can see that in most cases the AURPCE and the AULPCE have smaller estimated MSE values than the AURE, AULE, PCRE, and OLSE, respectively, which agrees with our theoretical findings. From Figure 1, the AURPCE and AULPCE also show more stable and smaller estimated MSE values. This suggests that the proposed estimators are meaningful in practice.

5. Conclusion

In this paper, we introduce two classes of new biased estimators to provide an alternative method of dealing with multicollinearity in the linear model. We also show that our new estimators are superior to the competitors in the MSEM criterion under some conditions. Finally, a Monte Carlo simulation study is given to illustrate the better performance of the proposed estimators.

http://dx.doi.org/10.1155/2014/639070

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (no. 11201505) and the Fundamental Research Funds for the Central Universities (no. 0208005205012).

References

[1] W. F. Massy, "Principal components regression in exploratory statistical research," Journal of the American Statistical Association, vol. 60, no. 309, pp. 234-266, 1965.

[2] A. E. Hoerl and R. W. Kennard, "Ridge regression: biased estimation for nonorthogonal problems," Technometrics, vol. 42, no. 1, pp. 80-86, 2000.

[3] K. J. Liu, "A new class of biased estimate in linear regression," Communications in Statistics - Theory and Methods, vol. 22, no. 2, pp. 393-402, 1993.

[4] B. Singh, Y. P. Chaubey, and T. D. Dwivedi, "An almost unbiased ridge estimator," Sankhya: The Indian Journal of Statistics, Series B, vol. 48, no. 3, pp. 342-346, 1986.

[5] F. Akdeniz and S. Kaciranlar, "On the almost unbiased generalized Liu estimator and unbiased estimation of the bias and MSE," Communications in Statistics - Theory and Methods, vol. 24, no. 7, pp. 1789-1797, 1995.

[6] S. Kaciranlar, S. Sakallioglu, F. Akdeniz, G. P. H. Styan, and H. J. Werner, "A new biased estimator in linear regression and a detailed analysis of the widely-analysed dataset on Portland cement," Sankhya: The Indian Journal of Statistics, Series B, vol. 61, pp. 443-459, 1999.

[7] F. Akdeniz and H. Erol, "Mean squared error matrix comparisons of some biased estimators in linear regression," Communications in Statistics - Theory and Methods, vol. 32, no. 12, pp. 2389-2413, 2003.

[8] M. H. Hubert and P. Wijekoon, "Improvement of the Liu estimator in linear regression model," Statistical Papers, vol. 47, no. 3, pp. 471-479, 2006.

[9] M. R. Baye and D. F. Parker, "Combining ridge and principal component regression: a money demand illustration," Communications in Statistics - Theory and Methods, vol. 13, no. 2, pp. 197-205, 1984.

[10] S. Kaciranlar and S. Sakallioglu, "Combining the Liu estimator and the principal component regression estimator," Communications in Statistics - Theory and Methods, vol. 30, no. 12, pp. 2699-2705, 2001.

[11] J. Xu and H. Yang, "On the restricted r-k class estimator and the restricted r-d class estimator in linear regression," Journal of Statistical Computation and Simulation, vol. 81, no. 6, pp. 679-691, 2011.

[12] J. B. Wu and H. Yang, "On the stochastic restricted almost unbiased estimators in linear regression model," Communications in Statistics - Simulation and Computation, vol. 43, pp. 428-440, 2014.

[13] J. K. Baksalary and G. Trenkler, "Nonnegative and positive definiteness of matrices modified by two matrices of rank one," Linear Algebra and Its Applications, vol. 151, pp. 169-184, 1991.

[14] N. Sarkar, "Mean square error matrix comparison of some estimators in linear regressions with multicollinearity," Statistics & Probability Letters, vol. 30, no. 2, pp. 133-138, 1996.

[15] Y. Li and H. Yang, "A new stochastic mixed ridge estimator in linear regression model," Statistical Papers, vol. 51, no. 2, pp. 315-323, 2010.

Yalian Li and Hu Yang

Department of Statistics and Actuarial Science, Chongqing University, Chongqing 401331, China

Correspondence should be addressed to Yalian Li; yaliancn@gmail.com

Received 15 January 2014; Accepted 8 March 2014; Published 2 April 2014

```
TABLE 1: MSE values of the OLSE, PCRE, AURE, and AURPCE.

k             0.00      0.10      0.30      0.40      0.50      0.80      0.90      1.00

gamma = 0.7
OLSE        0.0619    0.0619    0.0619    0.0619    0.0619    0.0619    0.0619    0.0619
PCRE        0.0285    0.0285    0.0285    0.0285    0.0285    0.0285    0.0285    0.0285
AURE        0.0619    0.0619    0.0619    0.0619    0.0619    0.0619    0.0619    0.0618
AURPCE      0.0285    0.0285    0.0285    0.0285    0.0285    0.0285    0.0285    0.0285

gamma = 0.85
OLSE        0.1085    0.1085    0.1085    0.1085    0.1085    0.1085    0.1085    0.1085
PCRE        0.0384    0.0384    0.0384    0.0384    0.0384    0.0384    0.0384    0.0384
AURE        0.1085    0.1085    0.1085    0.1085    0.1085    0.1084    0.1084    0.1083
AURPCE      0.0384    0.0384    0.0384    0.0383    0.0383    0.0383    0.0383    0.0383

gamma = 0.99
OLSE        1.4636    1.4636    1.4636    1.4636    1.4636    1.4636    1.4636    1.4636
PCRE        0.3522    0.3522    0.3522    0.3522    0.3522    0.3522    0.3522    0.3522
AURE        1.4636    1.4565    1.4116    1.3797    1.3441    1.2281    1.1889    1.1502
AURPCE      0.3522    0.3515    0.3464    0.3426    0.3381    0.3220    0.3161    0.3101

gamma = 0.999
OLSE       14.5437   14.5437   14.5437   14.5437   14.5437   14.5437   14.5437   14.5437
PCRE        3.3903    3.3903    3.3903    3.3903    3.3903    3.3903    3.3903    3.3903
AURE       14.5437    1.4399    6.0117    4.5727    3.5858    1.9800    1.6797    1.4430
AURPCE      3.3903    2.9735    1.8963    1.5285    1.2518    0.7514    0.6496    0.5673

TABLE 2: MSE values of the OLSE, PCRE, AULE, and AULPCE.

d             0.00      0.10      0.20      0.40      0.50      0.70      0.90      1.00

gamma = 0.7
OLSE        0.0709    0.0709    0.0709    0.0709    0.0709    0.0709    0.0709    0.0709
PCRE        0.0303    0.0303    0.0303    0.0303    0.0303    0.0303    0.0303    0.0303
AULE        0.0709    0.0709    0.0709    0.0709    0.0709    0.0709    0.0709    0.0709
AULPCE      0.0303    0.0303    0.0303    0.0303    0.0303    0.0303    0.0303    0.0303

gamma = 0.85
OLSE        0.1085    0.1085    0.1085    0.1085    0.1085    0.1085    0.1085    0.1085
PCRE        0.0384    0.0384    0.0384    0.0384    0.0384    0.0384    0.0384    0.0384
AULE        0.1083    0.1083    0.1084    0.1084    0.1085    0.1085    0.1085    0.1085
AULPCE      0.0383    0.0383    0.0383    0.0383    0.0383    0.0384    0.0384    0.0384

gamma = 0.99
OLSE        1.4636    1.4636    1.4636    1.4636    1.4636    1.4636    1.4636    1.4636
PCRE        0.3522    0.3522    0.3522    0.3522    0.3522    0.3522    0.3522    0.3522
AULE        1.1502    1.2066    1.2583    1.3461    1.3814    1.4337    1.4603    1.4636
AULPCE      0.3101    0.3179    0.3249    0.3367    0.3414    1.3483    0.3518    0.3522

gamma = 0.999
OLSE       14.5437   14.5437   14.5437   14.5437   14.5437   14.5437   14.5437   14.5437
PCRE        3.3903    3.3903    3.3903    3.3903    3.3903    3.3903    3.3903    3.3903
AULE        1.4430    2.8578    4.5509    8.2191    9.9597   12.7929   14.3436   14.5437
AULPCE      0.5673    0.9193    1.3076    2.0980    2.4599    3.0381    3.3502    3.3903
```

Research Article, Journal of Applied Mathematics, January 2014.