# Stochastic Restricted Biased Estimators in Misspecified Regression Model with Incomplete Prior Information

1. Introduction

Misspecification due to omitted relevant explanatory variables occurs very often when considering the linear regression model, and it causes these variables to become part of the error term. Consequently, the expected value of the error term of the model will not be zero. Moreover, the omitted variables may be correlated with the variables retained in the model. Therefore, one or more assumptions of the linear regression model are violated when the model is misspecified, and hence the estimators become biased and inconsistent. Further, it is well known that the ordinary least squares estimator (OLSE) may be unreliable if multicollinearity exists in the linear regression model. As a remedial measure for the multicollinearity problem, biased estimators based on the sample model y = Xβ + ε together with prior information in the form of exact or stochastic restrictions have received much attention in the statistical literature. The intention of this work is to examine the performance of the recently introduced stochastic restricted biased estimators in the misspecified regression model with incomplete prior knowledge about the regression coefficients when multicollinearity exists among the explanatory variables.

When we consider the biased estimation in misspecified regression model without any restrictions on regression parameters, Sarkar [1] discussed the consequences of exclusion of some important explanatory variables from a linear regression model when multicollinearity exists. Siray [2] and Wu [3] examined the efficiency of the r-d class estimator and r-k class estimator over some existing estimators, respectively, in the misspecified regression model. Chandra and Tyagi [4] studied the effect of misspecification due to the omission of relevant variables on the dominance of the r-(k, d) class estimator. Recently, Kayanan and Wijekoon [5] examined the performance of existing biased estimators and the respective predictors based on the sample information in a misspecified linear regression model without considering any prior information about regression coefficients.

It is recognized that the mixed regression estimator (MRE) introduced by Theil and Goldberger [6] outperforms ordinary least squares estimator (OLSE) when the regression model is correctly specified. The biased estimation with stochastic linear restrictions in the misspecified regression model due to inclusion of an irrelevant variable with the incorrectly specified prior information was discussed by Terasvirta [7]. Later Mittelhammer [8], Ohtani and Honda [9], Kadiyala [10], and Trenkler and Wijekoon [11] discussed the efficiency of MRE under misspecified regression model due to exclusion of a relevant variable with correctly specified prior information. Further, the superiority of MRE over the OLSE under the misspecified regression model with incorrectly specified sample and prior information was discussed by Wijekoon and Trenkler [12]. Hubert and Wijekoon [13] have considered the improvement of Liu estimator (LE) under a misspecified regression model with stochastic restrictions and introduced the Stochastic Restricted Liu Estimator (SRLE).

In this paper, the performance of the recently introduced stochastic restricted estimators, namely, the Stochastic Restricted Ridge Estimator (SRRE) proposed by Li and Yang [14], the Stochastic Restricted Almost Unbiased Ridge Estimator (SRAURE) and Stochastic Restricted Almost Unbiased Liu Estimator (SRAULE) proposed by Wu and Yang [15], the Stochastic Restricted Principal Component Regression Estimator (SRPCRE) proposed by He and Wu [16], and the Stochastic Restricted r-k (SRrk) class estimator and Stochastic Restricted r-d (SRrd) class estimator proposed by Wu [17], is examined in the misspecified regression model when multicollinearity exists among the explanatory variables. Further, a generalized form to represent these estimators is also proposed.

The rest of this article is organized as follows. The model specification and the estimators are presented in Section 2. In Section 3, the mean square error matrix (MSEM) comparison between two estimators and between the respective predictors is considered. In Section 4, a numerical example and a Monte Carlo simulation study are given to illustrate the theoretical results in the Scalar Mean Square Error (SMSE) criterion. Finally, some concluding remarks are given in Section 5. The references and appendices are given at the end of the paper.

2. Model Specification and the Estimators

Assume that the true regression model is given by

y = X_1β_1 + X_2β_2 + ε = X_1β_1 + δ + ε, (1)

where y is the n × 1 vector of observations on the dependent variable, X_1 and X_2 are the n × l and n × p matrices of observations on the m = l + p regressors, β_1 and β_2 are the l × 1 and p × 1 vectors of unknown coefficients, δ = X_2β_2, and ε is the n × 1 vector of disturbances such that E(ε) = 0 and E(εε′) = Ω = σ²I.

Let us say that the researcher misspecifies the regression model by excluding p regressors as

y = X_1β_1 + u. (2)

Let us also assume that there exists prior information on [[beta].sub.1] in the form of

r = Rβ_1 + g + v, (3)

where r is a q × 1 vector, R is a given q × l matrix with rank q, g is a q × 1 unknown fixed vector, and v is a q × 1 vector of disturbances such that E(v) = 0, D(v) = E(vv′) = Ψ = σ²W, where W is positive definite, and E(vu′) = 0.

By combining sample model (2) and prior information (3), Theil and Goldberger [6] proposed the mixed regression estimator (MRE) as

β̂_MRE = (X_1′X_1 + R′W⁻¹R)⁻¹ (X_1′y + R′W⁻¹r). (4)
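The estimator in (4) can be sketched numerically. The following Python snippet is illustrative only (it is not the authors' code, and all data and names are synthetic); it computes the MRE in the Theil-Goldberger form that (4) takes when Ω = σ²I and Ψ = σ²W:

```python
# Minimal sketch of the mixed regression estimator (MRE):
#   b_MRE = (X1'X1 + R'W^{-1}R)^{-1} (X1'y + R'W^{-1}r),
# combining the sample model y = X1*b1 + u with the stochastic
# prior information r = R*b1 + v. Synthetic data throughout.
import numpy as np

rng = np.random.default_rng(0)
n, l, q = 30, 3, 1
X1 = rng.normal(size=(n, l))
beta1 = np.array([1.0, 0.5, -0.2])
y = X1 @ beta1 + 0.1 * rng.normal(size=n)

R = np.array([[1.0, -1.0, 0.0]])   # q x l restriction matrix (illustrative)
W = np.eye(q)                      # prior dispersion up to sigma^2
r = R @ beta1                      # correctly specified prior (g = 0)

Winv = np.linalg.inv(W)
beta_mre = np.linalg.solve(X1.T @ X1 + R.T @ Winv @ R,
                           X1.T @ y + R.T @ Winv @ r)
beta_ols = np.linalg.solve(X1.T @ X1, X1.T @ y)
```

Because the MRE trades a little sample fit for agreement with the prior restriction, its restriction residual r − Rβ̂ is never larger than that of the OLSE.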

To combat multicollinearity, several researchers have introduced different types of stochastic restricted estimators in place of the MRE. Seven such estimators are the SRRE, SRAURE, SRLE, SRAULE, SRPCRE, SRrk class estimator, and SRrd class estimator, defined below, respectively:

[mathematical expression not reproducible], (5)

where k > 0, 0 < d < 1, and T_h = (t_1, t_2, ..., t_h) consists of the first h columns of T = (t_1, t_2, ..., t_h, ..., t_l), the orthogonal matrix of standardized eigenvectors of X_1′X_1.

According to Kadiyala [10], we now apply the simultaneous decomposition to the two symmetric matrices X_1′X_1 and R′Ψ⁻¹R, as

B′X_1′X_1B = I, B′R′Ψ⁻¹RB = Λ, (6)

where X_1′X_1 is a positive definite matrix, R′Ψ⁻¹R is a positive semidefinite matrix, B is an l × l nonsingular matrix, and Λ is an l × l diagonal matrix with eigenvalues λ_i > 0 for i = 1, 2, ..., q and λ_i = 0 for i = q + 1, ..., l.
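The decomposition in (6) can be computed explicitly: writing S = X_1′X_1 and M = R′Ψ⁻¹R, one may take B = S^(−1/2)U, where U diagonalizes S^(−1/2)MS^(−1/2). A Python sketch with synthetic matrices (note that `eigh` returns eigenvalues in ascending rather than the paper's descending order, so the q positive eigenvalues appear last):

```python
# Simultaneous diagonalization B'SB = I, B'MB = Lambda for
# S positive definite and M positive semidefinite of rank q.
import numpy as np

rng = np.random.default_rng(1)
n, l, q = 40, 4, 2
X1 = rng.normal(size=(n, l))
R = rng.normal(size=(q, l))
Psi = np.eye(q)                          # prior dispersion (illustrative)

S = X1.T @ X1                            # positive definite
M = R.T @ np.linalg.inv(Psi) @ R         # positive semidefinite, rank q

w, V = np.linalg.eigh(S)                 # S^{-1/2} via eigendecomposition
S_inv_half = V @ np.diag(w ** -0.5) @ V.T

lam, U = np.linalg.eigh(S_inv_half @ M @ S_inv_half)
B = S_inv_half @ U                       # the transformation matrix B

I_check = B.T @ S @ B                    # should equal I (eq. (6))
Lam_check = B.T @ M @ B                  # should be diagonal, rank q
```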

Let X_* = X_1B, R_* = RB, and γ = B⁻¹β_1, so that X_*′X_* = I and R_*′Ψ⁻¹R_* = Λ; then models (1), (2), and (3) can be written as

y = X_*γ + δ + ε, (7)

y = X_*γ + u, (8)

r = R_*γ + g + v. (9)

According to Wijekoon and Trenkler [12], the corresponding MRE is given by

γ̂_MRE = (I + σ²Λ)⁻¹ (X_*′y + R_*′W⁻¹r). (10)

Hence, the respective expectation vector, bias vector, and dispersion matrix are given by

E(γ̂_MRE) = γ + τA, Bias(γ̂_MRE) = τA, D(γ̂_MRE) = σ²τ, (11)

where τ = (I + σ²Λ)⁻¹ and A = X_*′δ + R_*′W⁻¹g.

In the case of misspecification, now the SRRE, SRAURE, SRLE, SRAULE, SRPCRE, SRrk, and SRrd for model (7) can be written as

γ̂_SRRE = C_kγ̂_MRE, γ̂_SRAURE = C*_kγ̂_MRE, γ̂_SRLE = C_dγ̂_MRE, γ̂_SRAULE = C*_dγ̂_MRE, γ̂_SRPCRE = C_hγ̂_MRE, γ̂_SRrk = C_hkγ̂_MRE, γ̂_SRrd = C_hdγ̂_MRE, (12)

respectively, where C_k = (1 + k)⁻¹, C*_k = (1 + 2k)(C_k)², C_d = 2⁻¹(1 + d), C*_d = 2⁻¹(3 − d)C_d, C_h = T_hT_h′, C_hk = C_kC_h, and C_hd = C_dC_h.

It is clear that [C.sub.k], [C.sup.*.sub.k], [C.sub.d], and [C.sup.*.sub.d] are positive definite and [C.sub.h], [C.sub.hk], and [C.sub.hd] are nonnegative definite.

Since all these estimators can be written in terms of γ̂_MRE, we now give a generalized form to represent the SRRE, SRAURE, SRLE, SRAULE, SRPCRE, SRrk, and SRrd:

γ̂_(j) = G_(j)γ̂_MRE, (13)

where G_(j) is a positive definite matrix if it stands for C_k, C*_k, C_d, or C*_d, and it is a nonnegative definite matrix if it stands for C_h, C_hk, or C_hd.

Now the expectation vector, bias vector, the dispersion matrix, and the mean square error matrix can be written as

E(γ̂_(j)) = G_(j)(γ + τA),

Bias(γ̂_(j)) = (G_(j) − I)γ + G_(j)τA,

D(γ̂_(j)) = σ²G_(j)τG_(j)′,

MSEM(γ̂_(j)) = D(γ̂_(j)) + Bias(γ̂_(j))Bias(γ̂_(j))′, (14)

where τ = (I + σ²Λ)⁻¹ and A = X_*′δ + R_*′W⁻¹g.

Based on (14), the respective bias vector, dispersion matrix, and MSEM of the MRE, SRRE, SRAURE, SRLE, SRAULE, SRPCRE, SRrk, and SRrd can easily be obtained and are given in Table B1 in Appendix B.
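Using τ = (I + σ²Λ)⁻¹ and A = X_*′δ + R_*′W⁻¹g as defined after (14), the MRE's bias τA and dispersion σ²τ can be checked by Monte Carlo simulation. The sketch below is illustrative Python (not the paper's R code) for a small canonical setup in which X_*′X_* = I, Λ is diagonal, σ² = 1, and W = I, so that W⁻¹ can be omitted:

```python
# Monte Carlo check of the MRE moments: empirical bias ~ tau @ A
# and empirical covariance ~ sigma^2 * tau. Synthetic canonical model.
import numpy as np

rng = np.random.default_rng(6)
n, l, q, n_rep = 20, 2, 2, 20000
sigma2 = 1.0

Q, _ = np.linalg.qr(rng.normal(size=(n, l)))
X_s = Q                                  # X_*'X_* = I
R_s = np.diag([2.0, 0.5])                # makes Lambda diagonal
gamma = np.array([1.0, -1.0])
delta = 0.3 * rng.normal(size=n)         # fixed misspecification effect
g = np.array([0.2, -0.1])                # fixed incorrect-prior shift

Lam = R_s.T @ R_s / sigma2               # Lambda = R_*' Psi^{-1} R_*
tau = np.linalg.inv(np.eye(l) + sigma2 * Lam)
A = X_s.T @ delta + R_s.T @ g            # A = X_*'delta + R_*'W^{-1}g

est = np.empty((n_rep, l))
for t in range(n_rep):
    y = X_s @ gamma + delta + rng.normal(size=n)
    r = R_s @ gamma + g + rng.normal(size=q)
    est[t] = tau @ (X_s.T @ y + R_s.T @ r)   # MRE as in (10)

emp_bias = est.mean(axis=0) - gamma      # should approximate tau @ A
emp_cov = np.cov(est.T)                  # should approximate sigma2 * tau
```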

By using the approach of Kadiyala [10] and (3) and (4), the generalized prediction function can be defined as follows:

y_0 = X_*γ + δ,

ŷ_(j) = X_*γ̂_(j), (15)

where y_0 is the actual value and ŷ_(j) is the corresponding predictor.

The MSEM of the generalized predictor is given by

MSEM(ŷ_(j)) = X_*D(γ̂_(j))X_*′ + (X_*Bias(γ̂_(j)) − δ)(X_*Bias(γ̂_(j)) − δ)′. (16)

Note that the predictors based on the MRE, SRRE, SRAURE, SRLE, SRAULE, SRPCRE, SRrk, and SRrd are denoted by ŷ_MRE, ŷ_SRRE, ŷ_SRAURE, ŷ_SRLE, ŷ_SRAULE, ŷ_SRPCRE, ŷ_SRrk, and ŷ_SRrd, respectively.

3. Mean Square Error Matrix (MSEM) Comparisons

If two generalized biased estimators γ̂_(i) and γ̂_(j) are given, the estimator γ̂_(j) is said to be superior to γ̂_(i) in the MSEM sense if and only if MSEM(γ̂_(i)) − MSEM(γ̂_(j)) ≥ 0. Similarly, if two generalized predictors ŷ_(i) and ŷ_(j) are given, the predictor ŷ_(j) is said to be superior to ŷ_(i) in the MSEM sense if and only if MSEM(ŷ_(i)) − MSEM(ŷ_(j)) ≥ 0.

Now let b_(i) = Bias(γ̂_(i)), b_(j) = Bias(γ̂_(j)), D_(i,j) = D(γ̂_(i)) − D(γ̂_(j)), and Δ_(i,j) = MSEM(γ̂_(i)) − MSEM(γ̂_(j)) = D_(i,j) + b_(i)b_(i)′ − b_(j)b_(j)′.

By applying Lemma A1 (see Appendix A), the following theorem can be stated for the superiority of [[??].sub.(j)] over [[??].sub.(i)] with respect to the MSEM criterion.

Theorem 1. If D_(i,j) is positive definite, then γ̂_(j) is superior to γ̂_(i) in the MSEM sense when the regression model is misspecified due to excluding relevant variables if and only if

b_(j)′(D_(i,j) + b_(i)b_(i)′)⁻¹b_(j) ≤ 1. (17)

Proof. Let D_(i,j) be a positive definite matrix. According to Lemma A1 (see Appendix A), Δ_(i,j) is a nonnegative definite matrix if and only if b_(j)′(D_(i,j) + b_(i)b_(i)′)⁻¹b_(j) ≤ 1. This completes the proof.
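Theorem 1's equivalence can be probed numerically: for a positive definite D_(i,j) and bias vectors b_(i) and b_(j), the smallest eigenvalue of Δ_(i,j) is nonnegative exactly when condition (17) holds. A small sketch with arbitrary synthetic values (not taken from the paper):

```python
# Numerical check of Theorem 1: Delta_(i,j) >= 0  <=>  condition (17).
import numpy as np

rng = np.random.default_rng(2)
p = 3
M = rng.normal(size=(p, p))
D_ij = M @ M.T + 0.5 * np.eye(p)        # positive definite D_(i,j)
b_i = rng.normal(size=p)                 # bias of estimator i
b_j = 0.1 * rng.normal(size=p)           # (smaller) bias of estimator j

# Condition (17): b_j' (D_(i,j) + b_i b_i')^{-1} b_j <= 1
cond = float(b_j @ np.linalg.solve(D_ij + np.outer(b_i, b_i), b_j))

# Delta_(i,j) = D_(i,j) + b_i b_i' - b_j b_j'
Delta = D_ij + np.outer(b_i, b_i) - np.outer(b_j, b_j)
min_eig = float(np.linalg.eigvalsh(Delta).min())
agrees = (cond <= 1.0) == (min_eig >= -1e-10)
```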

The following theorem can be stated for the superiority of the predictor ŷ_(j) over ŷ_(i) with respect to the MSEM criterion.

Theorem 2. If A ≥ 0, then ŷ_(j) is superior to ŷ_(i) in the MSEM sense when the regression model is misspecified due to excluding relevant variables if and only if θ ∈ R(A) and θ′A⁻θ ≤ 1, where A = X_*Δ_(i,j)X_*′ + X_*(b_(i) − b_(j))(b_(i) − b_(j))′X_*′ + δδ′, θ = δ + X_*(b_(i) − b_(j)), R(A) stands for the column space of A, and A⁻ is a g-inverse of A (the condition does not depend on the choice of g-inverse).

Proof. According to (16), we can write MSEM(ŷ_(i)) − MSEM(ŷ_(j)) as

MSEM(ŷ_(i)) − MSEM(ŷ_(j)) = X_*D_(i,j)X_*′ + (X_*b_(i) − δ)(X_*b_(i) − δ)′ − (X_*b_(j) − δ)(X_*b_(j) − δ)′. (18)

After some straightforward calculation, this can be written as

MSEM(ŷ_(i)) − MSEM(ŷ_(j)) = A − θθ′, (19)

where A = X_*(Δ_(i,j) + (b_(i) − b_(j))(b_(i) − b_(j))′)X_*′ + δδ′ and θ = δ + X_*(b_(i) − b_(j)).

Due to Lemma A3 (see Appendix A), MSEM(ŷ_(i)) − MSEM(ŷ_(j)) is a nonnegative definite matrix if and only if A ≥ 0, θ ∈ R(A), and θ′A⁻θ ≤ 1, where R(A) stands for the column space of A and A⁻ is a g-inverse of A. This completes the proof.

Based on Theorems 1 and 2, we can define Corollaries C1-C28, written in Appendix C, for the superiority conditions between two selected estimators and for the respective predictors by substituting the relevant expressions for b_(i), b_(j), D(γ̂_(i)), and D(γ̂_(j)) given in Table B1 in Appendix B.

4. Illustration of Theoretical Results

4.1. Numerical Example. To illustrate the theoretical results, we considered the dataset which gives the total National Research and Development Expenditures as a Percent of Gross National Product by Country from 1972 to 1986. The dependent variable Y of this dataset is the percentage spent by the United States, and the four independent variables are X_1, X_2, X_3, and X_4. The variable X_1 represents the percent spent by the former Soviet Union, X_2 that spent by France, X_3 that spent by West Germany, and X_4 that spent by Japan. The data have been analysed by Gruber [18], Akdeniz and Erol [19], and Li and Yang [14], among others. Now we assemble the data as follows:

[mathematical expression not reproducible]. (20)

Note that the eigenvalues of X′X are 302.96, 0.728, 0.044, and 0.035; the condition number is 93; and the Variance Inflation Factor (VIF) values are 6.91, 21.58, 29.75, and 1.79. This implies the existence of serious multicollinearity in the dataset.
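These diagnostics are easy to reproduce on any design matrix. The sketch below uses synthetic data rather than the expenditure dataset; it computes the condition number as √(λ_max/λ_min) of X′X, the convention under which the quoted value 93 ≈ √(302.96/0.035), and VIF_j = 1/(1 − R_j²). The auxiliary regressions here omit the intercept, one common variant:

```python
# Multicollinearity diagnostics: condition number and VIFs.
# Synthetic collinear design in the style of eq. (21); illustrative only.
import numpy as np

rng = np.random.default_rng(3)
n, m, rho = 50, 4, 0.95
Z = rng.normal(size=(n, m))
X = np.sqrt(1 - rho**2) * Z + rho * Z[:, [m - 1]]   # collinear columns

# condition number = sqrt(lambda_max / lambda_min) of X'X
eig = np.linalg.eigvalsh(X.T @ X)                   # ascending order
cond_number = float(np.sqrt(eig[-1] / eig[0]))

# VIF_j = 1 / (1 - R_j^2), regressing column j on the remaining columns
vif = []
for j in range(m):
    others = np.delete(X, j, axis=1)
    xj = X[:, j]
    coef, *_ = np.linalg.lstsq(others, xj, rcond=None)
    r2 = 1 - np.sum((xj - others @ coef) ** 2) / np.sum((xj - xj.mean()) ** 2)
    vif.append(1.0 / (1.0 - r2))
```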

The corresponding OLS estimate of β is β̂ = (X′X)⁻¹X′y = (0.645, 0.089, 0.143, 0.152)′ and the estimate of σ² is σ̂² = 0.00153. In this example, we consider R = (1, −2, −2, −2) and g = (1, −1, 2, 0). The SMSE values of the estimators are summarized in Tables B2-B3 in Appendix B.

Table B2 shows the estimated SMSE values of the MRE, SRRE, SRAURE, SRLE, SRAULE, SRPCRE, SRrk, and SRrd for the regression model when (l, p) = (4, 0), (l, p) = (3, 1), and (l, p) = (2, 2) with respect to the shrinkage parameters (k/d), where l denotes the number of variables in the model and p denotes the number of misspecified variables. Table B3 shows the estimated SMSE values of the predictors of the MRE, SRRE, SRAURE, SRLE, SRAULE, SRPCRE, SRrk, and SRrd for the regression model when (l, p) = (4, 0), (l, p) = (3, 1), and (l, p) = (2, 2) for some selected shrinkage parameters (k/d).

Note that when (l, p) = (4, 0) the model is correctly specified, when (l, p) = (3, 1) one variable is omitted from the model, and when (l, p) = (2, 2) two variables are omitted from the model. For simplicity, we choose shrinkage parameter values k and d in the range (0, 1).

From Table B2, we can observe that the MRE is superior to the other estimators when (l, p) = (4, 0), and that SRAULE, SRRE, SRLE, and SRAURE outperform the other estimators for (k/d) < 0.2, 0.2 ≤ (k/d) < 0.5, 0.5 ≤ (k/d) < 0.7, and (k/d) ≥ 0.7, respectively, when (l, p) = (3, 1). Similarly, SRLE and SRRE are superior to the other estimators for (k/d) < 0.5 and (k/d) ≥ 0.5, respectively, when (l, p) = (2, 2).

From Table B3, we further observe that predictors based on SRLE and SRRE outperform the other predictors for (k/d) < 0.5 and (k/d) ≥ 0.5, respectively, when (l, p) = (4, 0) and (l, p) = (3, 1), and predictors based on SRrd and SRrk are superior to the other predictors for (k/d) < 0.5 and (k/d) ≥ 0.5, respectively, when (l, p) = (2, 2).

4.2. Simulation. For further clarification, a Monte Carlo simulation study is carried out at different levels of misspecification using R 3.2.5. Following McDonald and Galarneau [20], the explanatory variables are generated as follows:

x_ij = (1 − ρ²)^(1/2) z_ij + ρz_im, i = 1, 2, ..., n; j = 1, 2, ..., m, (21)

where z_ij are independent standard normal pseudorandom numbers and ρ is specified so that the theoretical correlation between any two explanatory variables is given by ρ². The dependent variable is generated by using the following equation:

y_i = β_1x_i1 + β_2x_i2 + β_3x_i3 + β_4x_i4 + β_5x_i5 + ε_i, i = 1, 2, ..., n, (22)

where ε_i are normal pseudorandom numbers with mean zero and variance one. Also, we select β = (β_1, β_2, β_3, β_4, β_5)′ as the normalized eigenvector corresponding to the largest eigenvalue of X′X, for which β′β = 1. Further, we choose R = (1, 1, 1, 1, 1) and g = (1, −2, 0, 3, 1).
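The generation scheme in (21)-(22) can be sketched directly. The Python code below (illustrative, not the paper's R script) follows formula (21) as printed, with z_im as the shared component, and picks β as the normalized top eigenvector of X′X:

```python
# Data generation for the simulation: collinear X via eq. (21),
# beta = normalized eigenvector of the largest eigenvalue of X'X,
# and y via eq. (22) with N(0, 1) disturbances.
import numpy as np

rng = np.random.default_rng(4)
n, m, rho = 50, 5, 0.99
Z = rng.normal(size=(n, m))
X = np.sqrt(1 - rho**2) * Z + rho * Z[:, [m - 1]]   # eq. (21)

eigvals, eigvecs = np.linalg.eigh(X.T @ X)
beta = eigvecs[:, -1]          # eigh sorts ascending, so take the last column

eps = rng.normal(size=n)       # N(0, 1) disturbances
y = X @ beta + eps             # eq. (22)
```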

Then the following setup is considered to investigate the effects of different degrees of multicollinearity on the estimators:

(i) ρ = 0.9, condition number = 9.49, and VIF = (5.99, 5.88, 5.94, 5.96, 20.47).

(ii) ρ = 0.99, condition number = 34.77, and VIF = (57.66, 56.50, 57.26, 57.31, 225.06).

(iii) ρ = 0.999, condition number = 115.66, and VIF = (574.3, 562.8, 570.7, 570.8, 2271.4).

Three different sets of observations are considered by selecting (l, p) = (5, 0), (l, p) = (4, 1), and (l, p) = (3, 2) when n = 50, where l denotes the number of variables in the model and p denotes the number of misspecified variables. Note that when (l, p) = (5, 0) the model is correctly specified, when (l, p) = (4, 1) one variable is omitted from the model, and when (l, p) = (3, 2) two variables are omitted from the model. For simplicity, we select values of k and d in the range (0, 1).

The simulation is repeated 2000 times by generating new pseudorandom numbers, and the simulated SMSE values of the estimators and predictors are obtained using the following equations:

SMSE(γ̂_(j)) = (1/2000) Σ_{t=1}^{2000} (γ̂_(j),t − γ)′(γ̂_(j),t − γ), SMSE(ŷ_(j)) = (1/2000) Σ_{t=1}^{2000} (ŷ_(j),t − y_0)′(ŷ_(j),t − y_0), (23)

where γ̂_(j),t and ŷ_(j),t denote the estimate and the prediction in the t-th replication.
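The simulated SMSE in (23) is an average of squared estimation errors over replications. Since the stochastic restricted estimators' formulas are not reproduced here, the sketch below uses the OLSE and an ordinary ridge estimator (k = 0.5) as illustrative stand-ins; under severe multicollinearity the shrinkage estimator attains the smaller simulated SMSE:

```python
# Simulated SMSE: average ||estimate - beta||^2 over repeated samples
# from a fixed, severely collinear design. Illustrative stand-in
# estimators (OLS vs. ridge), not the paper's stochastic restricted ones.
import numpy as np

rng = np.random.default_rng(5)
n, m, rho, k, n_rep = 50, 3, 0.99, 0.5, 200
Z = rng.normal(size=(n, m))
X = np.sqrt(1 - rho**2) * Z + rho * Z[:, [m - 1]]   # fixed design, eq. (21)
w, V = np.linalg.eigh(X.T @ X)
beta = V[:, -1]                                     # beta'beta = 1

sse_ols = sse_ridge = 0.0
for _ in range(n_rep):                              # fresh errors each run
    y = X @ beta + rng.normal(size=n)
    b_ols = np.linalg.solve(X.T @ X, X.T @ y)
    b_ridge = np.linalg.solve(X.T @ X + k * np.eye(m), X.T @ y)
    sse_ols += float((b_ols - beta) @ (b_ols - beta))
    sse_ridge += float((b_ridge - beta) @ (b_ridge - beta))

smse_ols = sse_ols / n_rep      # simulated SMSE of OLS
smse_ridge = sse_ridge / n_rep  # simulated SMSE of ridge
```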

The simulation results are summarized in Tables B4-B9 in Appendix B.

Tables B4, B5, and B6 show the estimated SMSE values of the estimators for the regression model when (l, p) = (5, 0), (l, p) = (4, 1), and (l, p) = (3, 2) and ρ = 0.9, ρ = 0.99, and ρ = 0.999 for the selected values of the shrinkage parameters (k/d), respectively. Tables B7, B8, and B9 show the corresponding estimated SMSE values of the predictors for the above regression models, respectively.

From Table B4, we can observe that MRE and SRAULE outperform the other estimators for (k/d) < 0.8 and (k/d) > 0.8, respectively, when (l, p) = (5, 0) and (l, p) = (4, 1). Further, SRLE and SRRE are superior to the other estimators for (k/d) < 0.5 and (k/d) ≥ 0.5, respectively, when (l, p) = (3, 2) under ρ = 0.9.

From Table B5, we can observe that SRAULE, MRE, and SRAURE outperform the other estimators for (k/d) < 0.3, 0.3 ≤ (k/d) < 0.7, and (k/d) ≥ 0.7, respectively, when (l, p) = (5, 0). Similarly, SRAULE, SRRE, SRLE, and SRAURE are superior to the other estimators for (k/d) < 0.2, 0.2 ≤ (k/d) < 0.5, 0.5 ≤ (k/d) < 0.7, and (k/d) ≥ 0.7, respectively, when (l, p) = (4, 1), and SRLE and SRRE outperform the other estimators for (k/d) < 0.5 and (k/d) ≥ 0.5, respectively, when (l, p) = (3, 2) and ρ = 0.99.

The results in Table B6 indicate that MRE is superior to the other estimators when (l, p) = (5, 0), and SRAULE, SRRE, SRLE, and SRAURE outperform the other estimators for (k/d) < 0.2, 0.2 ≤ (k/d) < 0.5, 0.5 ≤ (k/d) < 0.7, and (k/d) ≥ 0.7, respectively, when (l, p) = (4, 1). Further, SRLE and SRRE outperform the other estimators for (k/d) < 0.5 and (k/d) ≥ 0.5, respectively, when (l, p) = (3, 2) and ρ = 0.999.

From Tables B7-B9, we further observe that the predictors based on SRrd and SRrk always outperform the other predictors for (k/d) < 0.5 and (k/d) ≥ 0.5, respectively, when (l, p) = (5, 0), (l, p) = (4, 1), and (l, p) = (3, 2).

The SMSE values of the selected estimators are plotted against different ρ values to demonstrate the results graphically when (l, p) = (3, 2). Figures 1-3 show the graphical illustration of the performance of the estimators in the misspecified regression model ((l, p) = (3, 2)) when ρ = 0.9, ρ = 0.99, and ρ = 0.999, respectively. Similarly, Figures 4-6 present the graphical illustration of the performance of the predictors in the misspecified regression model ((l, p) = (3, 2)) when ρ = 0.9, ρ = 0.99, and ρ = 0.999, respectively.

5. Conclusion

Theorems 1 and 2 give the common form of the superiority conditions for comparing the estimators (MRE, SRRE, SRAURE, SRLE, SRAULE, SRPCRE, SRrk, and SRrd) and their respective predictors in the MSEM criterion in the misspecified linear regression model when the prior information on the regression coefficients is incomplete and multicollinearity exists among the explanatory variables.

From the simulation study, the estimators and predictors that are superior to the others under different conditions can be identified. The results obtained in this research yield significant improvements in parameter estimation in misspecified regression models with incomplete prior information and are applicable to real-world problems.

https://doi.org/10.1155/2018/1452181

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Supplementary Materials

Supplementary material contains three appendix sections named Appendices A, B, and C. Appendix A: Lemmas A1-A3, used to prove the theorems. Appendix B: Tables B1-B9, which show the stochastic properties of the estimators and the results of the numerical example and the simulation study. Appendix C: Corollaries C1-C28.

References

[1] N. Sarkar, "Comparisons among some estimators in misspecified linear models with multicollinearity," Annals of the Institute of Statistical Mathematics, vol. 41, no. 4, pp. 717-724, 1989.

[2] G. Siray, "r-d class estimator under misspecification," Communications in Statistics--Theory and Methods, vol. 44, no. 22, pp. 4742-4756, 2015.

[3] J. Wu, "Superiority of the r-k class estimator over some estimators in a misspecified linear model," Communications in Statistics--Theory and Methods, vol. 45, no. 5, pp. 1453-1458, 2016.

[4] S. Chandra and G. Tyagi, "On the performance of some biased estimators in a misspecified model with correlated regressors," Statistics in Transition New Series, pp. 27-52, 2017.

[5] M. Kayanan and P. Wijekoon, "Performance of existing biased estimators and the respective predictors in a misspecified linear regression model," Open Journal of Statistics, pp. 876-900, 2017.

[6] H. Theil and A. S. Goldberger, "On Pure and Mixed Statistical Estimation in Economics," International Economic Review, vol. 2, no. 1, pp. 65-78, 1961.

[7] T. Terasvirta, Linear Restrictions in Misspecified Linear Models and Polynomial Distributed Lag Estimation, vol. 5, Department of Statistics, University of Helsinki, Helsinki, Finland, 1980.

[8] R. C. Mittelhammer, "On specification error in the general linear model and weak mean square error superiority of the mixed estimator," Communications in Statistics--Theory and Methods, vol. 10, no. 2, pp. 167-176, 1981.

[9] K. Ohtani and Y. Honda, "On small sample properties of the mixed regression predictor under misspecification," Communications in Statistics - Theory and Methods, pp. 2817-2825, 1984.

[10] K. Kadiyala, "Mixed regression estimator under misspecification," Economics Letters, vol. 21, no. 1, pp. 27-30, 1986.

[11] G. Trenkler and P. Wijekoon, "Mean square error matrix superiority of the mixed regression estimator under misspecification," Statistica, vol. 49, no. 1, pp. 65-71, 1989.

[12] P. Wijekoon and G. Trenkler, "Mean Square Error Matrix Superiority of Estimators under Linear Restrictions and Misspecification," Economics Letters, vol. 30, pp. 141-149, 1989.

[13] M. Hubert and P. Wijekoon, "Superiority of the stochastic restricted Liu estimator under misspecification," Statistica, vol. 64, no. 1, pp. 153-162, 2004.

[14] Y. Li and H. Yang, "A new stochastic mixed ridge estimator in linear regression model," Statistical Papers, pp. 315-323, 2010.

[15] J. Wu and H. Yang, "On the stochastic restricted almost unbiased estimators in linear regression model," Communications in Statistics--Simulation and Computation, vol. 43, no. 2, pp. 428-440, 2014.

[16] D. He and Y. Wu, "A Stochastic Restricted Principal Components Regression Estimator in the Linear Model," The Scientific World Journal, vol. 2014, Article ID 231506, 6 pages, 2014.

[17] J. Wu, "On the Stochastic Restricted r-k Class Estimator and Stochastic Restricted r-d Class Estimator in Linear Regression Model," Journal of Applied Mathematics, vol. 2014, Article ID 173836, 6 pages, 2014.

[18] M. H. Gruber, Improving Efficiency by Shrinkage, vol. 156 of Statistics: Textbooks and Monographs, Marcel Dekker, New York, NY, USA, 1998.

[19] F. Akdeniz and H. Erol, "Mean Squared Error Matrix Comparisons of Some Biased Estimators in Linear Regression," Communications in Statistics--Theory and Methods, pp. 2389-2413, 2003.

[20] G. C. McDonald and D. I. Galarneau, "A monte carlo evaluation of some ridge-type estimators," Journal of the American Statistical Association, vol. 70, no. 350, pp. 407-416, 1975.

Manickavasagar Kayanan (1,2) and Pushpakanthie Wijekoon (3)

(1) Department of Physical Science, Vavuniya Campus, University of Jaffna, Vavuniya, Sri Lanka

(2) Postgraduate Institute of Science, University of Peradeniya, Peradeniya, Sri Lanka

(3) Department of Statistics and Computer Science, University of Peradeniya, Peradeniya, Sri Lanka

Correspondence should be addressed to Manickavasagar Kayanan; mgayanan@vau.jfn.ac.lk

Received 17 December 2017; Accepted 1 March 2018; Published 10 April 2018

Academic Editor: Aera Thavaneswaran

Caption: Figure 1: SMSE values of the estimators in the misspecified regression model ((l, p) = (3, 2)) when n = 50 and [rho] = 0.9.

Caption: Figure 2: SMSE values of the estimators in the misspecified regression model when n = 50 and [rho] = 0.99.

Caption: Figure 3: SMSE values of the estimators in the misspecified regression model when n = 50 and [rho] = 0.999.

Caption: Figure 4: SMSE values of the predictors in the misspecified regression model when n = 50 and [rho] = 0.9.

Caption: Figure 5: SMSE values of the predictors in the misspecified regression model when n = 50 and [rho] = 0.99.

Caption: Figure 6: SMSE values of the predictors in the misspecified regression model when n = 50 and [rho] = 0.999.

Journal of Probability and Statistics, 2018.