# Empirical mode decomposition combined with local linear quantile regression for automatic boundary correction.

1. Introduction

We consider the following general nonparametric regression model:

y_i = f(x_i) + ε_i, i = 1, ..., n, (1)

where y_i is the response variable, x_i is a covariate, f(x) = E(y | x) is assumed to be a smooth nonparametric function, and the ε_i are independent and identically distributed random errors with mean 0 and variance σ².

Empirical mode decomposition (EMD) is a nonparametric method of analysis that is particularly useful for nonlinear and nonstationary time series. Over the last few years it has been widely applied to data from different disciplines, such as biology, finance, engineering, and climatology. As a fully adaptive method that handles nonlinear and nonstationary signal behavior, EMD can enhance estimation performance. However, EMD suffers from problems with boundary extension, curve fitting, and stopping criteria. Such problems may corrupt the entire decomposition and lead to misleading conclusions. Because finite data are involved, the algorithm must be adjusted to use certain boundary conditions; the end points are a particular problem in EMD, since their influence propagates into the interior of the data range during sifting. Data extension (or data prediction) is a risky procedure even for linear and stationary processes and is more difficult for nonlinear and nonstationary ones. It has been noted that only the values and locations of the next several extrema, and not all extended data, need to be predicted for EMD. Widely used approaches, such as the characteristic wave extending method, the mirror extending method, the data extending method, the data reconstruction method, and the similarity searching method, were proposed to overcome the problem and produce a more reasonable solution. Koenker and Bassett introduced quantile regression, a significant extension of traditional parametric and nonparametric regression methods. Quantile regression has been widely used in statistics since its introduction because of its ease of interpretation, robustness, and numerous applications in important areas, such as medicine, economics, environmental modeling, toxicology, and engineering [9,10].
A robust version of classical local linear regression (LLR), known as local linear quantile regression (LLQ) [11,12], has increasingly drawn interest. With its robust behavior, LLQ exhibits excellent boundary adjustment, and it can more efficiently distinguish systematic differences in dispersion, tail behavior, and other features with respect to covariates [12,13].

The current study uses the advantages of LLQ to automatically reduce the boundary effects of EMD instead of relying on the classical boundary solutions mentioned previously. The proposed method consists of two stages. At the first stage, LLQ is applied to the corrupted and noisy data; the remaining series is then expected to be hidden in the residuals. At the second stage, EMD is applied to the residuals. The final estimate is the sum of the fitted estimates from LLQ and EMD. Compared with EMD alone, this combination yields more accurate estimates.

The remainder of this study is organized as follows. In Section 2, we present a brief background of EMD and LLQ. Section 3 introduces the proposed method. Section 4 compares the results of the original EMD algorithm and the proposed new boundary adjustment by simulation experiments. Conclusions are drawn in Section 5.

2. Background

2.1. History of Boundary Treatment in Nonparametric Estimators. Most nonparametric techniques, such as kernel regression, wavelet thresholding, and empirical mode decomposition, show a sharp increase in variance and bias at points near the boundary. Many works in the literature aim to reduce the effects of the boundary problem. For kernel regression solutions, see [14,15]. For wavelet thresholding, in addition to using periodic or symmetric assumptions, the authors of [16,17] used polynomial regression to alleviate the boundary problem. For empirical mode decomposition, a ratio extension at the boundary has been proposed in place of the traditional mirror extension. Neural networks have been applied to each IMF to restrain the end effect, and an algorithm based on the sigma-pi neural network has been used to extend signals before applying EMD. Another approach couples mirror expansion with extrapolation of the regression function: the signal is first extrapolated at both endpoints through support vector (SV) regression to form a primary expansion signal, which is then further expanded through extrema mirror expansion; EMD is performed on the resulting signal to reduce end limitations.

In this paper we follow the strategies of [16,17] to handle the end effects of the boundary problem in EMD. Instead of classical polynomial nonparametric regression, we use a more robust nonparametric estimator, local linear quantile regression. Practical justifications for this choice are given in Section 2.5.

2.2. Empirical Mode Decomposition (EMD). EMD has proven to be a natural extension of, and an alternative to, traditional methods for analyzing nonlinear and nonstationary signals, such as wavelet methods, Fourier methods, and empirical orthogonal functions. In this section, we briefly describe the EMD algorithm. The main objective of EMD is to decompose the data y_t into small signals called intrinsic mode functions (IMFs). An IMF is a function whose upper and lower envelopes are symmetric and in which the number of zero-crossings and the number of extrema are equal or differ by at most one. The algorithm for extracting IMFs from a given time series y_t is called sifting and consists of the following steps.

(I) Setting initial estimates for the residue as r_0(t) = y_t, g_0(t) = r_{k-1}(t), i = 1, and the index of the IMF as k = 1.

(II) Constructing the lower envelope e_min(t), through the local minima, and the upper envelope e_max(t), through the local maxima, of the signal by the cubic spline method.

(III) Computing the mean values by averaging the upper and lower envelopes: m_{i-1}(t) = (e_max(t) + e_min(t))/2.

(IV) Subtracting the mean from the current signal, that is, g_i = g_{i-1} - m_{i-1}, and setting i = i + 1. Steps II to IV are repeated until g_i becomes an IMF. The kth IMF is then given by IMF_k = g_i.

(V) Updating the residue as r_k(t) = r_{k-1}(t) - IMF_k. This residual component is treated as new data and subjected to the process described above to extract the next IMF_{k+1}.

(VI) Repeating the steps above until the final residual component r(t) becomes a monotonic function, which is taken as the final estimate of the residue r̂(t).
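The sifting steps above can be sketched in a few lines of code. The following Python sketch is an illustration, not the authors' implementation: the fixed number of sifting passes and the extrema-count guards stand in for a formal IMF stopping rule.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift_one_imf(signal, t, n_sift=10):
    """Extract one IMF by repeated envelope-mean subtraction (steps II-IV).
    A fixed number of sifting passes stands in for a formal stopping rule."""
    g = signal.copy()
    for _ in range(n_sift):
        maxima = argrelextrema(g, np.greater)[0]
        minima = argrelextrema(g, np.less)[0]
        if len(maxima) < 4 or len(minima) < 4:
            break  # too few extrema to build stable spline envelopes
        upper = CubicSpline(t[maxima], g[maxima])(t)
        lower = CubicSpline(t[minima], g[minima])(t)
        g = g - (upper + lower) / 2.0   # subtract the envelope mean
    return g

def emd(signal, t, max_imfs=8):
    """Decompose a signal into IMFs plus a residue (steps I, V, VI)."""
    residue, imfs = signal.copy(), []
    for _ in range(max_imfs):
        n_ext = len(argrelextrema(residue, np.greater)[0]) + \
                len(argrelextrema(residue, np.less)[0])
        if n_ext < 3:       # residue is (near-)monotonic: stop
            break
        imf = sift_one_imf(residue, t)
        imfs.append(imf)
        residue = residue - imf
    return imfs, residue

t = np.linspace(0, 1, 512)
y = np.sin(16 * np.pi * t) + 0.5 * t          # fast tone plus slow trend
imfs, residue = emd(y, t)
print(len(imfs), np.allclose(y, sum(imfs) + residue))
```

By construction the decomposition is exactly additive: the sum of the extracted IMFs and the final residue reconstructs the input signal.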

Many methods have been presented to extract trends from a time series. The freehand and least squares methods are the most commonly used techniques; the former depends on the experience of the user, and the latter is difficult to apply when the original series is very irregular. EMD is another effective method for extracting trends.
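The IMF counting condition in the definition above (the number of zero-crossings and the number of extrema differ by at most one) is easy to verify numerically. A minimal Python sketch; the helper name is ours:

```python
import numpy as np

def imf_condition_holds(g):
    """Check the IMF counting condition: the number of zero-crossings
    and the number of local extrema differ by at most one."""
    g = np.asarray(g, dtype=float)
    # Zero-crossings: sign changes between consecutive samples.
    zero_crossings = int(np.sum(np.sign(g[:-1]) * np.sign(g[1:]) < 0))
    d = np.diff(g)
    # Local extrema: sign changes in the first difference.
    extrema = int(np.sum(np.sign(d[:-1]) * np.sign(d[1:]) < 0))
    return abs(zero_crossings - extrema) <= 1

t = np.linspace(0, 1, 200)
print(imf_condition_holds(np.sin(4 * np.pi * t)))        # a pure tone is an IMF
print(imf_condition_holds(np.sin(4 * np.pi * t) + 2 * t))  # a trend breaks it
```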

2.3. Local Linear Quantile Regression (LLQ). The seminal study of Koenker and Bassett introduced parametric quantile regression, which can be considered an alternative to classical regression in both parametric and nonparametric settings. Many models for the nonparametric approach, including locally polynomial quantile regression and kernel methods, have since been introduced into the statistical literature. In this paper we adopt the local linear quantile regression (LLQ) of Yu and Jones [12].

Let {(x_i, y_i), i = 1, ..., n} be bivariate observations. To estimate the τth conditional quantile function of the response y given X = x, define

g(x) = Q_y(τ | x). (2)

Let K be a positive symmetric unimodal kernel function and consider the following weighted quantile regression problem:

min_(b_0, b_1) Σ_{i=1}^{n} ρ_τ(y_i - b_0 - b_1(x_i - x)) w_i(x), ρ_τ(u) = u(τ - I(u < 0)), (3)

where w_i(x) = K((x_i - x)/h)/h. Once the covariate observations are centered at the point x, the estimate of g(x) is simply b̂_0, the first component of the minimizer of (3); b̂_1 estimates the slope of the function g at the point x.
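As an illustration, the minimizer of (3) can be computed numerically at a single point x. The following Python sketch is a generic implementation under our own choices (a Gaussian kernel, a fixed bandwidth, and Nelder-Mead minimization); it is not tied to any particular package, and b̂_0 is returned as the estimate of g(x):

```python
import numpy as np
from scipy.optimize import minimize

def check_loss(u, tau):
    """Koenker-Bassett check function rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (u < 0))

def llq_fit(x_obs, y_obs, x0, tau=0.5, h=0.07):
    """Local linear quantile estimate at x0: minimize the kernel-weighted
    check loss over intercept b0 and slope b1; b0 estimates g(x0)."""
    z = (x_obs - x0) / h
    w = np.exp(-0.5 * z**2) / h          # Gaussian kernel weights
    def objective(b):
        resid = y_obs - b[0] - b[1] * (x_obs - x0)
        return np.sum(w * check_loss(resid, tau))
    b_init = np.array([np.median(y_obs), 0.0])
    res = minimize(objective, b_init, method="Nelder-Mead")
    return res.x[0]                       # first component = ghat(x0)

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 1, 300))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, 300)
print(llq_fit(x, y, 0.25, tau=0.5))   # close to the true median sin(pi/2) = 1
```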

The higher-order LLQ estimate is the minimizer of the following:

min_(b_0, ..., b_p) Σ_{i=1}^{n} ρ_τ(y_i - Σ_{j=0}^{p} b_j (x_i - x)^j) w_i(x). (4)

The choice of the bandwidth parameter h strongly influences all nonparametric estimates. An excessively large h obscures local structure by oversmoothing, whereas an excessively small h introduces too much variability by relying on very few observations in the local polynomial fit.

2.4. Bandwidth Selection. The practical performance of ĝ_τ(x) depends strongly on the selected bandwidth parameter. In this study we adopt the strategy of Yu and Jones [12], which yields the following automatic bandwidth selection rule for smoothing conditional quantiles.

(1) Use a ready-made and sophisticated method to select h_mean; we use the plug-in technique of Ruppert, Sheather, and Wand [25].

(2) Obtain all other bandwidths from h_mean through h_τ = h_mean {τ(1 - τ)/φ(Φ^{-1}(τ))²}^{1/5}.

Here, φ and Φ are the standard normal density and distribution functions, and h_mean is a bandwidth parameter for regression mean estimation obtainable with various existing methods. As can be seen, this procedure yields identical bandwidths for the τ and (1 - τ) quantiles.
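The bandwidth rule in step (2) translates directly into code; a minimal Python sketch (the function name is ours):

```python
from scipy.stats import norm

def quantile_bandwidth(h_mean, tau):
    """Yu-Jones rule: scale a mean-regression bandwidth to quantile tau,
    h_tau = h_mean * {tau(1 - tau) / phi(Phi^{-1}(tau))^2}^{1/5}."""
    z = norm.ppf(tau)
    return h_mean * (tau * (1 - tau) / norm.pdf(z) ** 2) ** 0.2

print(round(quantile_bandwidth(0.1, 0.5), 4))   # prints 0.1095
# Identical bandwidths for tau and 1 - tau, as noted in the text:
print(quantile_bandwidth(0.1, 0.25), quantile_bandwidth(0.1, 0.75))
```

Note that even at τ = 0.5 the scale factor is (π/2)^{1/5} ≈ 1.094, so the median bandwidth is slightly larger than h_mean.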

2.5. The Behavior of the Local Linear Quantile Estimator in the Boundary Region. To examine the asymptotic behavior of local linear quantile estimators at the boundaries, we state a theorem that has been discussed in detail elsewhere; the proofs are omitted and only the key points are summarized. Without loss of generality, if U_t takes values only in [0, 1], one can consider only the left boundary point u_0 = ch, 0 < c < 1; a similar result holds for the right boundary point u_0 = 1 - ch.

Define

u_{j,c} = ∫_{-c}^{1} u^j K(u) du, v_{j,c} = ∫_{-c}^{1} u^j K²(u) du. (5)

Theorem 1. Consider the following assumptions.

(1) a(u) is twice continuously differentiable in a neighborhood of u_0.

(2) f_U(u) is continuous and f_U(u_0) > 0.

(3) f_{y|U,X}(y) is bounded and satisfies the Lipschitz condition.

(4) The kernel function K(·) is symmetric and has compact support, say [-1, 1].

(5) {(X_t, Y_t, U_t)} is a strictly stationary α-mixing process whose mixing coefficient satisfies Σ_{t≥1} t^l α^{(δ-2)/δ}(t) < ∞ for some δ > 2 and l > (δ - 2)/δ.

(6) E||X_t||^{2δ*} < ∞ with δ* > δ.

(7) Ω(u_0) is positive-definite and continuous in a neighborhood of u_0.

(8) Ω*(u_0) is positive-definite and continuous in a neighborhood of u_0.

(9) The bandwidth h satisfies h → 0 and nh → ∞.

(10) f(u, v | x_0, x_s; s) ≤ M < ∞ for s ≥ 1, where f(u, v | x_0, x_s; s) is the conditional density of (U_0, U_s) given (X_0 = x_0, X_s = x_s).

(11) n^{1/2 - δ/4} h^{δ/δ* - 1/2 - δ/4} = O(1).

The asymptotic normality of the local linear quantile estimator at the left boundary point is given by

√(nh) [â(u_0) - a(u_0) - (h²/2) a''(u_0) B_c + o_p(h²)] →_d N(0, V_c), (6)

where

B_c = (u_{2,c}² - u_{1,c} u_{3,c}) / (u_{0,c} u_{2,c} - u_{1,c}²), V_c = [τ(1 - τ)(u_{2,c}² v_{0,c} - 2 u_{1,c} u_{2,c} v_{1,c} + u_{1,c}² v_{2,c}) / (f_U(u_0)(u_{0,c} u_{2,c} - u_{1,c}²)²)] Ω(u_0)^{-1} Ω*(u_0) Ω(u_0)^{-1}. (7)

Further, the asymptotic normality of the local constant quantile estimator at the left boundary point [u.sub.0] = ch for 0 < c < 1 is

√(nh) [ã(u_0) - a(u_0) - h a'(u_0)(u_{1,c}/u_{0,c}) + o_p(h)] →_d N(0, Ṽ_c), (8)

where

Ṽ_c = [τ(1 - τ) v_{0,c} / (f_U(u_0) u_{0,c}²)] Ω(u_0)^{-1} Ω*(u_0) Ω(u_0)^{-1}. (9)

From the above theorem one can deduce that, at the boundaries, the asymptotic bias of the local constant quantile estimate is of order h, compared with order h² for the local linear quantile estimate. Hence local linear estimation behaves well at the boundaries and needs no boundary correction: the local linear quantile estimate does not suffer from boundary effects, whereas the local constant quantile estimate does. Local linear quantile regression is therefore preferable in practice.

3. Proposed Method

This section elaborates on the proposed method, which combines EMD and LLQ (EMD-LLQ). Since local linear quantile regression enjoys excellent boundary behavior, adding this component to empirical mode decomposition is expected to yield equally good boundary properties. Results from our numerical experiments strongly support this claim.

The basic idea behind the proposed method is to estimate the underlying function f as the sum of a set of EMD functions, f_EMD, and an LLQ function, f_LLQ. That is,

f(x) = f_LLQ(x) + f_EMD(x). (10)

We estimate the two components f_LLQ and f_EMD to obtain the proposed estimate, f̂_EMD-LLQ, by the following steps.

(1) Applying LLQ to the corrupted and noisy data y_i and obtaining the trend estimate f̂_LLQ.

(2) Determining the residuals from LLQ, that is, e_i = y_i - f̂_LLQ(x_i).

(3) Applying EMD to e_i, given that the remaining series is expected to be hidden in the residuals. This step is accomplished by performing the following substeps.

(I) Setting initial estimates for the residue as r_0(t) = e_t, g_0(t) = r_{k-1}(t), i = 1, and the index of the IMF as k = 1.

(II) Constructing the lower envelope e_min(t) and the upper envelope e_max(t) of the signal by the cubic spline method.

(III) Calculating the mean values by averaging the upper and lower envelopes: m_{i-1}(t) = (e_max(t) + e_min(t))/2.

(IV) Subtracting the mean from the current signal as g_i = g_{i-1} - m_{i-1} and setting i = i + 1. Steps II to IV are repeated until g_i becomes an IMF. The kth IMF is then given as IMF_k = g_i.

(V) Updating the residue as r_k(t) = r_{k-1}(t) - IMF_k. This residual component is treated as new data and subjected to the process described above to extract the next IMF_{k+1}.

(VI) The steps above are repeated until the final residual component r(t) becomes a monotonic function, which is taken as the final estimate of the residue r̂(t).

(4) The final estimate is the sum of the fitted estimates from LLQ and EMD:

f̂_EMD-LLQ(x) = f̂_LLQ(x) + f̂_EMD(x). (11)

4. Simulation Study

In this simulation, the software package R was employed to evaluate classical EMD and the proposed combined method, EMD-LLQ. The following conditions were set.

(1) Three different test functions (Table 1).

(2) Three different values of quantile [tau] (0.25, 0.50, and 0.75).

(3) Three different noise structures for the errors, namely:

(a) normal noise with zero mean and unit variance,

(b) correlated noise from a first-order autoregressive model, AR(1), with parameter 0.5,

(c) heavy-tailed noise from a t distribution with three degrees of freedom.

Datasets were simulated from each of the three test functions with a sample size of n = 100 (Figure 1). For each simulated dataset, the above two methods were applied to estimate the test function. In each case, 1,000 replications of the sample size n = 100 were made. The mean squared error (MSE) was used as the numerical measure to assess the quality of the estimate. The MSE was calculated for those observations that were at most 10 sample points away from the boundaries of the test functions:

MSE = (1/|N(Δ)|) Σ_{i ∈ N(Δ)} [f̂(x_i) - f(x_i)]², Δ = 10, (12)

where N([DELTA]) = {1, ..., [DELTA], n - [DELTA] + 1, ..., n}.
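The boundary criterion (12) translates directly into code; a minimal Python sketch (the function name is ours):

```python
import numpy as np

def boundary_mse(f_hat, f_true, delta=10):
    """MSE over the index set N(delta) = {1,...,delta, n-delta+1,...,n},
    i.e. the delta sample points nearest each boundary (equation (12))."""
    n = len(f_true)
    idx = np.r_[0:delta, n - delta:n]        # 0-based boundary indices
    return np.mean((f_hat[idx] - f_true[idx]) ** 2)

f_true = np.zeros(100)
f_hat = np.zeros(100)
f_hat[:10] = 0.5     # error confined to the left boundary
print(boundary_mse(f_hat, f_true))   # 10 of the 20 boundary points err by 0.5
```

With half of the 20 boundary points in error by 0.5, the criterion evaluates to 10 · 0.25 / 20 = 0.125.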

To compare the methods, Tables 2, 3, and 4 present the numerical results of the classical EMD and the proposed method.

4.1. Results. From the simulation results reported in Tables 2, 3, and 4, we observe the following. Regardless of the boundary assumption, test function, noise structure, and quantile value, the proposed method is consistently superior to the classical EMD under the periodic, symmetric (mirror), and wave conditions.

To ensure that the improvement in mean squared error is due to the proposed treatment and not to something else, we also evaluated the classical method and the proposed one with no boundary treatment at all. From the simulation results, we observed that although the classical solutions help improve the mean squared error, our improvement is considerably larger. Finally, to rule out the possibility that the differences are not significant, we applied the Wilcoxon signed-rank test, which confirmed that the proposed method performs better near the boundaries than EMD. All P values for the Wilcoxon tests are less than 0.05.

5. Conclusions

In this study, a new two-stage method is introduced to decrease the effects of the boundary problem in EMD. The proposed method couples LLQ at the first stage with classical EMD at the second stage. The empirical performance of the proposed method was tested in different numerical experiments by simulation and in a real data application. The results of these experiments illustrate the improvement over the EMD estimate in terms of MSE.

http://dx.doi.org/10.1155/2014/731827

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

The authors would like to thank the School of Mathematical Sciences Universiti Sains Malaysia for the financial support.

References

[1] N. E. Huang, Z. Shen, S. R. Long et al., "The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis," Proceedings of the Royal Society A, vol. 454, no. 1971, pp. 903-995, 1998.

[2] Z. Liu, "A novel boundary extension approach for empirical mode decomposition," in Intelligent Computing, vol. 4113 of Lecture Notes in Computer Science, pp. 299-304, Springer, Berlin, Germany, 2006.

[3] W. Wang, X. Li, and R. Zhang, "Boundary processing of HHT using support vector regression machines," in Computational Science--ICCS 2007, vol. 4489 of Lecture Notes in Computer Science, pp. 174-177, Springer, Berlin, Germany, 2007.

[4] J. Zhao and D. Huang, "Mirror extending and circular spline function for empirical mode decomposition method," Journal of Zhejiang University Science, vol. 2, no. 3, pp. 247-252, 2001.

[5] K. Zeng and M.-X. He, "A simple boundary process technique for empirical mode decomposition," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '04), pp. 4258-4261, September 2004.

[6] Z. Zhao and Y. Wang, "A new method for processing end effect in empirical mode decomposition," in Proceedings of the International Conference on Communications, Circuits and Systems (ICCCAS '07), pp. 841-845, July 2007.

[7] J. Wang, Y. Peng, and X. Peng, "Similarity searching based boundary effect processing method for empirical mode decomposition," Electronics Letters, vol. 43, no. 1, pp. 58-59, 2007.

[8] R. Koenker and G. Bassett Jr., "Regression quantiles," Econometrica, vol. 46, no. 1, pp. 33-50, 1978.

[9] M. Buchinsky, "Quantile regression, Box-Cox transformation model, and the U.S. wage structure, 1963-1987," Journal of Econometrics, vol. 65, no. 1, pp. 109-154, 1995.

[10] Y. Wei, A. Pere, R. Koenker, and X. He, "Quantile regression methods for reference growth charts," Statistics in Medicine, vol. 25, no. 8, pp. 1369-1382, 2006.

[11] P. Chaudhuri, "Nonparametric estimates of regression quantiles and their local Bahadur representation," The Annals of Statistics, vol. 19, no. 2, pp. 760-777, 1991.

[12] K. Yu and M. C. Jones, "Local linear quantile regression," Journal of the American Statistical Association, vol. 93, no. 441, pp. 228-237, 1998.

[13] R. Koenker, Quantile Regression, John Wiley & Sons, New York, NY, USA, 2005.

[14] H.-G. Muller and J.-L. Wang, "Hazard rate estimation under random censoring with varying kernels and bandwidths," Biometrics, vol. 50, no. 1, pp. 61-76, 1994.

[15] H.-G. Muller, "Smooth optimum kernel estimators near endpoints," Biometrika, vol. 78, no. 3, pp. 521-530, 1991.

[16] H.-S. Oh, P. Naveau, and G. Lee, "Polynomial boundary treatment for wavelet regression," Biometrika, vol. 88, no. 1, pp. 291-298, 2001.

[17] H.-S. Oh and T. C. M. Lee, "Hybrid local polynomial wavelet shrinkage: wavelet regression with automatic boundary adjustment," Computational Statistics & Data Analysis, vol. 48, no. 4, pp. 809-819, 2005.

[18] Q. Wu and S. D. Riemenschneider, "Boundary extension and stop criteria for empirical mode decomposition," Advances in Adaptive Data Analysis, vol. 2, no. 2, pp. 157-169, 2010.

[19] Y. Deng, W. Wang, C. Qian, Z. Wang, and D. Dai, "Boundary-processing-technique in EMD method and Hilbert transform," Chinese Science Bulletin, vol. 46, no. 11, pp. 954-961, 2001.

[20] D.-C. Lin, Z.-L. Guo, F.-P. An, and F.-L. Zeng, "Elimination of end effects in empirical mode decomposition by mirror image coupled with support vector regression," Mechanical Systems and Signal Processing, vol. 31, pp. 13-28, 2012.

[21] C. D. Blakely, A Fast Empirical Mode Decomposition Technique for Nonstationary Nonlinear Time Series, vol. 3, Elsevier Science, New York, NY, USA, 2005.

[22] A. Amar and Z. El abidine Guennoun, "Contribution of wavelet transformation and empirical mode decomposition to measurement of US core inflation," Applied Mathematical Sciences, vol. 6, no. 135, pp. 6739-6752, 2012.

[23] Y. Fan, J. W. Zhi, and S. L. Yuan, "Improvement in time-series trend analysis," Computer Technology and Development, vol. 16, pp. 82-84, 2006.

[24] R. Koenker, P. Ng, and S. Portnoy, "Quantile smoothing splines," Biometrika, vol. 81, no. 4, pp. 673-680, 1994.

[25] D. Ruppert, S. J. Sheather, and M. P. Wand, "An effective bandwidth selector for local least squares regression," Journal of the American Statistical Association, vol. 90, pp. 1257-1270, 1995.

[26] X. Xu, Semiparametric quantile dynamic time series models and their applications [Ph.D. thesis], University of North Carolina at Charlotte, Charlotte, NC, USA, 2005.

[27] Z. Cai and X. Xu, "Nonparametric quantile estimations for dynamic smooth coefficient models," Journal of the American Statistical Association, vol. 103, no. 484, pp. 1595-1608, 2008.

Abobaker M. Jaber, (1) Mohd Tahir Ismail, (1) and Alssaidi M. Altaher (2)

(1) School of Mathematical Sciences, Universiti Sains Malaysia, 11800 Minden, Penang, Malaysia

(2) Statistics Department, Sebha University, Sebha 00218, Libya

Correspondence should be addressed to Abobaker M. Jaber; jaber3t@yahoo.co.uk

Received 22 November 2013; Revised 30 January 2014; Accepted 4 February 2014; Published 25 March 2014

```
TABLE 1: Formulas of the test functions used in the simulation.

Test function    Formula

1                f(x) = sin(pi x) - sin(2 pi x) + 0.5x
2                f(x) = 10e^(-10x) + 2 if x <= 0.5; 3cos(10 pi x) if 0.5 < x < 1
3                (formula not reproducible in the source)

TABLE 2: The MSE of the classical EMD and the proposed method (EMD-LLQ)
for test function 1 under a variety of boundary solutions and noise
structures, quantiles tau = 0.25, 0.50, 0.75, and sample size 100.
Wilcoxon P values, where given, are less than 2.2e-16.

Boundary     tau     Method       N~(0,1)      T(100,3)     AR(0.5)

none         0.25    EMD          0.261690     0.3027736    0.3190445
                     EMD-LLQ      0.06343      0.058571     0.010130
                     Wilcoxon V   49589        497928       480711
             0.50    EMD          0.269175     0.310785     0.32917
                     EMD-LLQ      0.11712      0.122119     0.145831
                     Wilcoxon V   451927       459306       449112
             0.75    EMD          0.2552796    0.306369     0.3223
                     EMD-LLQ      0.070048     0.067575     0.10510
                     Wilcoxon V   493926       494605       479475

periodic     0.25    EMD          1.38123      1.40508      1.45777
                     EMD-LLQ      0.11287      0.12451      0.14150
                     Wilcoxon V   500496       50076        500427
             0.50    EMD          1.42545      1.43629      1.47529
                     EMD-LLQ      0.06375      0.05967      0.09826
                     Wilcoxon V   500500       500076       500477
             0.75    EMD          1.40223      1.41718      1.44426
                     EMD-LLQ      0.06509      0.06284      0.10475
                     Wilcoxon V   500493       500490       500476

symmetric    0.25    EMD          0.86598      0.91897      1.005607
                     EMD-LLQ      0.11656      0.12304      0.14698
                     Wilcoxon V   500500       499701       500456
             0.50    EMD          0.87603      0.916301     1.005788
                     EMD-LLQ      0.8760       0.05816      0.09760
                     Wilcoxon V   500498       499540       500497
             0.75    EMD          0.8820793    0.92558      0.96602
                     EMD-LLQ      0.070185     0.06459      0.10498
                     Wilcoxon V   500500       500497       500492

wave         0.25    EMD          1.20811      1.18084      1.250195
                     EMD-LLQ      0.11318      0.11967      0.14845
                     Wilcoxon V   500500       500500       500500
             0.50    EMD          1.19274      1.20855      1.226365
                     EMD-LLQ      0.06456      0.06173      0.098486
                     Wilcoxon V   500500       500500       500500
             0.75    EMD          1.199136     1.20274      1.22829
                     EMD-LLQ      0.066864     0.06339      0.10583
                     Wilcoxon V   500500       500500       500500

TABLE 3: The MSE of the classical EMD and the proposed method (EMD-LLQ)
for test function 2 under a variety of boundary solutions and noise
structures, quantiles tau = 0.25, 0.50, 0.75, and sample size 100.
All Wilcoxon P values are less than 2.2e-16.

Boundary     tau     Method       N~(0,1)      T(100,3)     AR(0.5)

none         0.25    EMD          13.71622     8.517261     7.290762
                     EMD-LLQ      2.200657     2.148026     2.063819
                     Wilcoxon V   467459       463117       500260
             0.50    EMD          7.818736     8.094024     6.974746
                     EMD-LLQ      2.147824     0.8982332    0.847309
                     Wilcoxon V   499774       498875       489150
             0.75    EMD          7.168348     8.445187     7.364002
                     EMD-LLQ      1.68474      1.69877      1.57035
                     Wilcoxon V   482567       483765       489150

periodic     0.25    EMD          7.430924     6.976339     6.77826
                     EMD-LLQ      2.118629     2.145495     2.052055
                     Wilcoxon V   498562       497328       495251
             0.50    EMD          7.135594     6.982454     6.827251
                     EMD-LLQ      0.904618     0.9031342    0.8702984
                     Wilcoxon V   500441       500350       500203
             0.75    EMD          7.226472     6.865929     6.725184
                     EMD-LLQ      1.672161     1.691775     1.598264
                     Wilcoxon V   498068       496900       496766

symmetric    0.25    EMD          8.693953     8.800117     8.678484
                     EMD-LLQ      2.121942     2.142339     2.059669
                     Wilcoxon V   500500       500500       500500
             0.50    EMD          8.679718     8.902223     8.76185
                     EMD-LLQ      0.918506     0.8823274    0.8637991
                     Wilcoxon V   500500       500500       500500
             0.75    EMD          8.623087     8.756977     8.718655
                     EMD-LLQ      1.655741     1.670038     1.571081
                     Wilcoxon V   500500       500500       500500

wave         0.25    EMD          7.243865     7.493569     7.341059
                     EMD-LLQ      2.12519      2.149727     2.053073
                     Wilcoxon V   500500       500500       500500
             0.50    EMD          7.138806     7.430882     7.449
                     EMD-LLQ      0.908747     0.9056965    0.8560205
                     Wilcoxon V   500500       500500       500500
             0.75    EMD          7.124459     7.374468     7.302789
                     EMD-LLQ      1.653102     1.704716     1.552656
                     Wilcoxon V   500500       500497       500500

TABLE 4: The MSE of the classical EMD and the proposed method (EMD-LLQ)
for test function 3 under a variety of boundary solutions and noise
structures, quantiles tau = 0.25, 0.50, 0.75, and sample size 100.
All Wilcoxon P values are less than 2.2e-16.

Boundary     tau     Method       N~(0,1)      T(100,3)     AR(0.5)

none         0.25    EMD          0.034744     0.037490     0.06867
                     EMD-LLQ      0.01494      0.01827      0.04637
                     Wilcoxon V   453610       426890       371062
             0.50    EMD          0.035888     0.035473     0.0708678
                     EMD-LLQ      0.0142052    0.0168427    0.04288602
                     Wilcoxon V   740664       435835       415746
             0.75    EMD          0.03540      0.0375       0.07260
                     EMD-LLQ      0.014913     0.0197       0.048020
                     Wilcoxon V   455217       420619       393951

periodic     0.25    EMD          0.01495      0.01601      1.46184
                     EMD-LLQ      0.01201      0.01425      0.14555
                     Wilcoxon V   142798       167295       500426
             0.50    EMD          0.014931     0.017672     1.475389
                     EMD-LLQ      0.010031     0.016859     0.0967748
                     Wilcoxon V   159630       193475       500491
             0.75    EMD          0.01513      0.0173       1.44361
                     EMD-LLQ      0.01176      0.0157       0.10338
                     Wilcoxon V   133946       170435       500474

symmetric    0.25    EMD          0.856264     0.938382     1.01019
                     EMD-LLQ      0.11334      0.12452      0.14649
                     Wilcoxon V   500489       499304       500420
             0.50    EMD          0.853808     0.928291     0.9873542
                     EMD-LLQ      0.065446     0.056100     0.09569597
                     Wilcoxon V   500500       500498       500474
             0.75    EMD          0.86854      0.9322       0.96783
                     EMD-LLQ      0.06553      0.06180      0.104910
                     Wilcoxon V   500500       500485       500463

wave         0.25    EMD          1.19549      1.19853      1.22884
                     EMD-LLQ      0.11508      0.12732      0.14624
                     Wilcoxon V   500500       500500       500500
             0.50    EMD          1.180275     1.200136     1.217799
                     EMD-LLQ      0.063552     0.055484     0.094623
                     Wilcoxon V   500500       500500       500500
             0.75    EMD          1.193706     1.20841      1.236941
                     EMD-LLQ      0.06860      0.20362      0.110907
                     Wilcoxon V   500500       499500       500500
```