
Time Scale in Least Square Method

1. Introduction

Although largely theoretical, time scale calculus builds a bridge between continuous and discrete analysis [1, 2]. Such a calculus provides a unified framework for the analysis of difference and differential equations [3-5]. These equations [6, 7] have been applied to dynamic programming [8-13], neural networks [10, 14, 15], economic modeling [10, 16], population biology [17], quantum calculus [18], geometric analysis [19], real-time communication networks [20], intelligent robotic control [21], adaptive sampling [22], approximation theory [13], financial engineering [12] on time scales, and switched linear circuits [23], among others.

This study deals with the estimation of the parameters of the simple linear regression equation by applying the least squares method on a time scale.

The main purpose of the least squares method is to minimize the sum of squared vertical deviations. To find the coefficients of the simple linear regression model $Y = \beta_0 + \beta_1 x + \epsilon$, the partial derivatives of $Q = \sum_{i=1}^{n} \epsilon_i^2 = \sum_{i=1}^{n} (y_i - \beta_0 - \beta_1 x_i)^2$ with respect to the coefficients are computed, and the normal equations are obtained by setting the partial derivative for each coefficient equal to zero. From the normal equations, the fitted model $y_i = \hat{\beta}_0 + \hat{\beta}_1 x_i + e_i$ corresponding to (9) is obtained.
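As an illustration of this derivation (not part of the original text), the normal equations can be reproduced symbolically; the sketch below assumes SymPy is available and uses placeholder symbols in place of actual data.

```python
# Illustrative sketch (not from the paper): derive the least squares
# normal equations symbolically with SymPy for a small sample of size n.
import sympy as sp

n = 5
x = sp.symbols('x0:%d' % n)          # placeholder regressor values x_1..x_n
y = sp.symbols('y0:%d' % n)          # placeholder responses y_1..y_n
b0, b1 = sp.symbols('b0 b1')         # stand-ins for beta_0 and beta_1

Q = sum((y[i] - b0 - b1 * x[i])**2 for i in range(n))   # cf. equation (11)

# Setting each partial derivative of Q to zero yields the normal equations.
eq_b0 = sp.Eq(sp.expand(sp.diff(Q, b0)), 0)
eq_b1 = sp.Eq(sp.expand(sp.diff(Q, b1)), 0)
print(eq_b0)
print(eq_b1)
```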

In statistics, the least squares method is applied with the ordinary derivative when the parameters of a regression equation are estimated. In this study, the time scale derivative definition is applied to the least squares method instead. To this end, the simple linear regression model (9) is considered, and $\hat{\beta}_0$ and $\hat{\beta}_1$, the estimators of the coefficients $\beta_0$ and $\beta_1$, are obtained with respect to the forward and backward jump operators. Different values of the parameters $\hat{\beta}_0$ and $\hat{\beta}_1$ arise from the forward and backward jump operators for the estimated simple linear regression equation $\hat{y} = \hat{\beta}_0 + \hat{\beta}_1 x$. When the observation values are discrete, the forward jump operator $\sigma(t) = t + 1$ and the backward jump operator $\rho(t) = t - 1$ are used.

The main approach in regression analysis is to minimize the sum of squared (vertical) deviations between the actual and the estimated values. It is a widely preferred method for regression analysis owing to its statistical properties [24]. The point to keep in mind is that, since the least squares method is concerned with the sum of squared vertical deviations, the analysis proceeds accordingly. The application of the first and second derivatives must also be considered, since the aim is to minimize the sum of squared vertical deviations.

The study consists of two main parts, followed by an implementation based on time scale theory. The main parts cover time scale theory and simple linear regression, respectively. The time scale part explains the time scale derivative and defines the forward and backward jump operators. The other main part explains the simple linear regression model of the form (9) and the calculation of $\hat{\beta}_0$ and $\hat{\beta}_1$, the estimators of $\beta_0$ and $\beta_1$, by the method of least squares. Using the time scale derivative definition, the normal equations for the forward and backward jump operators are given, together with the corresponding values of $\hat{\beta}_0$ and $\hat{\beta}_1$, the estimators of $\beta_0$ and $\beta_1$ under the forward and backward jump operators.

2. Time Scale Preliminaries

Bohner and Peterson [1] introduced the concept of the time scale in their studies. Their purpose was to bring discrete analysis and continuous analysis together under one model. Any nonempty closed subset of the real numbers $\mathbb{R}$ is called a time scale and is denoted by the symbol $T$. Thus $\mathbb{R}$, $\mathbb{Z}$, $\mathbb{N}$, and $\mathbb{N}_0$ (the real numbers, the integers, the natural numbers, and the nonnegative integers) are examples of time scales, as are sets such as $[0, 1] \cup [2, 3]$ and $[0, 1] \cup \mathbb{N}$.

The rational numbers $\mathbb{Q}$, the irrational numbers $\mathbb{R} \setminus \mathbb{Q}$, the complex numbers $\mathbb{C}$, and the open interval $(0, 1)$ are not time scales. Since a time scale must be closed, it is clear that the rational numbers can never form a time scale.

The delta derivative $f^{\Delta}$ of a function $f$ defined on $T$ specializes as follows.

(i) If $T = \mathbb{R}$, then $f^{\Delta} = f'$, the ordinary derivative.

(ii) If $T = \mathbb{Z}$, then $f^{\Delta} = \Delta f$, the forward difference operator.

Definition 1. Let $f : T \to \mathbb{R}$ and $t \in T^{\kappa}$. The number $f^{\Delta}(t)$ (when it exists) is such that, for every $\epsilon > 0$, there exists a neighborhood $U$ of $t$ (i.e., $U = (t - \delta, t + \delta) \cap T$ for some $\delta > 0$) such that

$\left| \left[ f(\sigma(t)) - f(s) \right] - f^{\Delta}(t) \left[ \sigma(t) - s \right] \right| \leq \epsilon \left| \sigma(t) - s \right|$ (1)

holds for all $s \in U$; $f^{\Delta}(t)$ is called the delta derivative of $f$ at $t$ [1, 25, 26].

Definition 2. Let $f : T \to \mathbb{R}$ and $t \in T^{\kappa}$. The number $f^{\nabla}(t)$ (when it exists) is such that, for every $\epsilon > 0$, there exists a neighborhood $U$ of $t$ (i.e., $U = (t - \delta, t + \delta) \cap T$ for some $\delta > 0$) such that

$\left| \left[ f(\rho(t)) - f(s) \right] - f^{\nabla}(t) \left[ \rho(t) - s \right] \right| \leq \epsilon \left| \rho(t) - s \right|$ (2)

holds for all $s \in U$; $f^{\nabla}(t)$ is called the nabla derivative of $f$ at $t$ [16, 26].

Definition 3. Let $f : T \to \mathbb{R}$ and $t \in T^{\kappa}$. The number $f^{c}(t)$ (when it exists) is such that, for every $\epsilon > 0$,

$\left| \left[ f(\sigma(t)) - f(\rho(t)) \right] - f^{c}(t) \left[ \sigma(t) - \rho(t) \right] \right| \leq \epsilon \left| \sigma(t) - \rho(t) \right|$ (3)

holds; $f^{c}(t)$ is called the center derivative of $f$ at $t$ [27].
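For the case $T = \mathbb{Z}$ considered in this study, Definitions 1-3 combined with $\sigma(t) = t + 1$ and $\rho(t) = t - 1$ reduce to ordinary difference quotients; the summary below is a consequence of the definitions rather than a formula quoted from the source.

```latex
% On T = Z, with sigma(t) = t + 1 and rho(t) = t - 1:
f^{\Delta}(t) = f(t+1) - f(t), \qquad
f^{\nabla}(t) = f(t) - f(t-1), \qquad
f^{c}(t) = \frac{f(t+1) - f(t-1)}{2}.
```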

Definition 4 (forward jump operator [1]). Let $T$ be a time scale. Then, for $t \in T$, the forward jump operator $\sigma : T \to T$ is defined by

$\sigma(t) := \min \{ s \in T : s > t \}.$ (4)

The right-graininess function $\mu_{\sigma} : T \to [0, \infty)$ is defined for all $t \in T$ by

$\mu_{\sigma}(t) := \sigma(t) - t.$ (5)

Definition 5 (backward jump operator [1]). Let $T$ be a time scale. Then, for $t \in T$, the backward jump operator $\rho : T \to T$ is defined by

$\rho(t) := \max \{ s \in T : s < t \}.$ (6)

The left-graininess function $\mu_{\rho} : T \to [0, \infty)$ is defined by

$\mu_{\rho}(t) := t - \rho(t).$ (7)
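As a small illustration (not from the paper), the jump operators and graininess functions of Definitions 4 and 5 can be coded directly for a finite, discrete time scale stored as a sorted list; the set $T$ below is an arbitrary example.

```python
# Hedged sketch of Definitions 4 and 5 on a finite, discrete time scale.
# The time scale is assumed to be given as a sorted list of points.
T = [0, 1, 2, 5, 7, 10]

def sigma(t):
    """Forward jump operator: smallest point of T strictly greater than t."""
    later = [s for s in T if s > t]
    return min(later) if later else t      # convention: sigma(max T) = max T

def rho(t):
    """Backward jump operator: largest point of T strictly smaller than t."""
    earlier = [s for s in T if s < t]
    return max(earlier) if earlier else t  # convention: rho(min T) = min T

mu_sigma = lambda t: sigma(t) - t          # right-graininess, equation (5)
mu_rho = lambda t: t - rho(t)              # left-graininess, equation (7)

print(sigma(2), rho(2), mu_sigma(2), mu_rho(2))   # -> 5 1 3 1
```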

3. Simple Linear Regression Analysis

Simple linear regression [28, 29] involves a single explanatory (independent) variable and a single response (dependent) variable. Suppose that a true relationship exists between $Y$ and $x$ and that the $Y$ values observed at each level of $x$ are random. The expected value of $Y$ at each level of $x$ is given by the equation

$E(Y \mid x) = \beta_0 + \beta_1 x,$ (8)

where $\beta_0$ is the intercept, $\beta_1$ is the slope, and $\beta_0$ and $\beta_1$ are unknown regression coefficients. Each observation of $Y$ can be described by the model below.

In the equation

$Y = \beta_0 + \beta_1 x + \epsilon,$ (9)

$\epsilon$ is a random error term with zero mean and (unknown) variance $\sigma^2$.

Consider $n$ pairs of observations $(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)$. Figure 1 shows the scatter diagram of the observed values and the estimated regression line. The estimators of $\beta_0$ and $\beta_1$ should pass the "best" line through the data. The German scientist Carl Friedrich Gauss (1777-1855) proposed estimating the parameters $\beta_0$ and $\beta_1$ in (9) by minimizing the sum of the squared vertical deviations in Figure 1.

This criterion for estimating the regression parameters is the least squares method. Using (9), the $n$ observations in the sample can be written as follows:

$y_i = \beta_0 + \beta_1 x_i + \epsilon_i, \quad i = 1, 2, \ldots, n,$ (10)

and the sum of the squared deviations of the observations from the true regression line is

$Q = \sum_{i=1}^{n} \epsilon_i^2 = \sum_{i=1}^{n} \left( y_i - \beta_0 - \beta_1 x_i \right)^2.$ (11)

The least squares estimators of $\beta_0$ and $\beta_1$ are $\hat{\beta}_0$ and $\hat{\beta}_1$, which satisfy

$\dfrac{\partial Q}{\partial \beta_0}\bigg|_{\hat{\beta}_0, \hat{\beta}_1} = -2 \sum_{i=1}^{n} \left( y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i \right) = 0, \qquad \dfrac{\partial Q}{\partial \beta_1}\bigg|_{\hat{\beta}_0, \hat{\beta}_1} = -2 \sum_{i=1}^{n} \left( y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i \right) x_i = 0.$ (12)

Simplifying these two equations gives

$n \hat{\beta}_0 + \hat{\beta}_1 \sum_{i=1}^{n} x_i = \sum_{i=1}^{n} y_i, \qquad \hat{\beta}_0 \sum_{i=1}^{n} x_i + \hat{\beta}_1 \sum_{i=1}^{n} x_i^2 = \sum_{i=1}^{n} x_i y_i.$ (13)

These equations are called the normal equations of least squares.

Thus, the estimated regression line is as follows:

$\hat{y} = \hat{\beta}_0 + \hat{\beta}_1 x.$ (14)

Each observation pair satisfies the relation below:

$y_i = \hat{\beta}_0 + \hat{\beta}_1 x_i + e_i, \quad i = 1, 2, \ldots, n,$ (15)

where $e_i = y_i - \hat{y}_i$ are the residuals, $y_i$ are the observed values, and $\hat{y}_i$ are the fitted values of $y_i$.
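To make the normal equations concrete, here is a small numerical sketch (illustrative only; the data below are made up and not taken from the paper) that solves (13) for $\hat{\beta}_0$ and $\hat{\beta}_1$ and checks that the residuals in (15) sum to zero under the ordinary derivative definition.

```python
# Illustrative sketch: solve the least squares normal equations (13)
# for the simple linear regression model y = b0 + b1*x + e.
x = [1.0, 2.0, 3.0, 4.0, 5.0]            # made-up data
y = [2.1, 3.9, 6.2, 8.1, 9.8]
n = len(x)

Sx, Sy = sum(x), sum(y)
Sxx = sum(xi * xi for xi in x)
Sxy = sum(xi * yi for xi, yi in zip(x, y))

b1_hat = (n * Sxy - Sx * Sy) / (n * Sxx - Sx * Sx)   # slope estimator
b0_hat = (Sy - b1_hat * Sx) / n                      # intercept estimator

residuals = [yi - (b0_hat + b1_hat * xi) for xi, yi in zip(x, y)]
print(b0_hat, b1_hat, sum(residuals))    # residual sum is (numerically) zero
```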

4. Ordinary Least Square Method

4.1. Normal Equations. This subsection gives the normal equations for the usual (ordinary) case and the resulting formulas for the estimators $\hat{\beta}_0$ and $\hat{\beta}_1$ under the forward and backward jump operators.

Normal equations in the usual (ordinary) case

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]. (16)

Time scale normal equations (forward jump operator)

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]. (17)

Time scale normal equations (backward jump operator)

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]. (18)
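Since (17) and (18) are not reproducible in this source, the sketch below rests on an explicit assumption: on $T = \mathbb{Z}$, the delta (respectively nabla) derivative of $Q$ with respect to each coefficient is taken as a unit forward (respectively backward) difference, in line with $\sigma(t) = t + 1$ and $\rho(t) = t - 1$. Under this reading the intercept condition forces the residual sum to $\pm n/2$, which matches the values reported in Table 1; the form of the slope condition, however, is our assumption rather than a restatement of (17)-(18). NumPy is assumed to be available.

```python
# Hedged sketch: one possible reading of the time scale normal equations.
# The unit forward/backward difference of Q in each coefficient is set to 0.
import numpy as np

def jump_fit(x, y, step):
    """step = +1: forward jump operator; step = -1: backward jump operator."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    # Difference in the intercept a:  -2*sum(e) + step*n          = 0
    # Difference in the slope b:      -2*sum(x*e) + step*sum(x^2) = 0
    # with e = y - a - b*x; both conditions are linear in (a, b).
    A = np.array([[n, x.sum()], [x.sum(), (x * x).sum()]])
    rhs = np.array([y.sum() - step * n / 2.0,
                    (x * y).sum() - step * (x * x).sum() / 2.0])
    a, b = np.linalg.solve(A, rhs)
    return a, b

x = np.arange(1, 11)                                   # made-up data, n = 10
y = 3.0 + 1.0 * x + np.random.default_rng(0).normal(0.0, 1.0, 10)
for step in (+1, -1):
    a, b = jump_fit(x, y, step)
    print(step, a, b, (y - a - b * x).sum())           # residual sum = step*n/2
```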

4.2. Graphs for the Normal Situation and the Forward and Backward Jump Operators with Samples of Size 10, 20, 30, 40, and 50. The simple linear regression equations obtained from samples of 10, 20, 30, 40, and 50 data points for the normal situation and for the forward and backward jump operators are illustrated in Figure 2.

The sums of the vertical distances between the simple linear regression models for the normal, forward, and backward operators and the observation values $Y_i$ for sample sizes $n = 10, 20, 30, 40$, and $50$ are given in Table 1.

There is a relationship between the sample size and the sum of the vertical distances between the regression line and the observation values $Y_i$: for the jump operators, the sum of the vertical distances equals half the sample size, with opposite signs for the forward and backward cases. The results of the implementation can be seen in Table 1.
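The half-sample-size pattern can be traced to the unit graininess of $\mathbb{Z}$. A short derivation for the forward jump operator (consistent with Table 1, although not spelled out in the text) is the following:

```latex
% Unit forward difference of Q with respect to the intercept a on T = Z,
% with e_i = y_i - a - b x_i:
Q(a+1, b) - Q(a, b)
  = \sum_{i=1}^{n} \left[ (e_i - 1)^2 - e_i^2 \right]
  = -2 \sum_{i=1}^{n} e_i + n .
% Setting this difference to zero gives \sum_i e_i = n/2; the backward
% (nabla) case gives \sum_i e_i = -n/2, matching the entries
% 5, 10, 15, 20, 25 and their negatives in Table 1.
```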

4.3. Minimum Test. Let $(k, m)$ be a critical point of $Q(a, b)$; that is, $Q_a(k, m) = 0$ and $Q_b(k, m) = 0$. Define

$L = Q_{aa}(k, m)\, Q_{bb}(k, m) - \left[ Q_{ab}(k, m) \right]^2.$ (19)

If $L > 0$ and $Q_{aa}(k, m) > 0$, then $Q(k, m)$ is a minimum. The minimum test calculated according to the ordinary derivative definition is as follows [30]:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]. (20)

The minimum test calculated according to the time scale derivative definition for the forward and backward jump operators is as follows:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]. (21)

Since $L > 0$ and $Q_{aa} > 0$ always hold (provided that the $x_i$ are not all equal), $Q(k, m)$ always has a minimum.
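As a quick check of the classical minimum test, the following sketch (assuming SymPy; not part of the original paper) computes $Q_{aa}$, $Q_{bb}$, $Q_{ab}$, and $L$ symbolically; the positivity of $L$ for non-constant $x_i$ follows from the Cauchy-Schwarz inequality.

```python
# Illustrative sketch: second-order (minimum) conditions for Q(a, b).
import sympy as sp

n = 4                                   # small symbolic example
a, b = sp.symbols('a b')
x = sp.symbols('x0:%d' % n)
y = sp.symbols('y0:%d' % n)
Q = sum((y[i] - a - b * x[i])**2 for i in range(n))

Qaa = sp.diff(Q, a, 2)                  # = 2*n > 0
Qbb = sp.diff(Q, b, 2)                  # = 2*sum(x_i^2)
Qab = sp.diff(Q, a, b)                  # = 2*sum(x_i)
L = sp.expand(Qaa * Qbb - Qab**2)       # = 4*(n*sum(x_i^2) - (sum(x_i))^2) >= 0

print(Qaa, L)
```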

5. Results

In statistics, the least squares method is applied with the ordinary derivative when the parameters of a regression equation are estimated. In this study, the time scale derivative definition has been applied to the least squares method instead. To this end, the simple linear regression model (9) was considered, and $\hat{\beta}_0$ and $\hat{\beta}_1$, the estimators of the coefficients $\beta_0$ and $\beta_1$, were obtained with respect to the forward and backward jump operators. Different values of the parameters $\hat{\beta}_0$ and $\hat{\beta}_1$ arise from the forward and backward jump operators for the estimated simple linear regression equation $\hat{y} = \hat{\beta}_0 + \hat{\beta}_1 x$. When the observation values are discrete, the forward jump operator $\sigma(t) = t + 1$ and the backward jump operator $\rho(t) = t - 1$ are used [31].

The study analyzes the case of the integers $\mathbb{Z}$ under the time scale derivative definition. The standard approach in regression analysis is to minimize the sum of the squared (vertical) deviations between the actual and the estimated values. Since the least squares method is concerned with the sum of squared vertical deviations, the analysis proceeds accordingly, and the application of the first and second derivatives must be considered because the aim is to minimize this sum. Since $L > 0$ and $Q_{aa} > 0$ always hold, $Q(k, m)$ always has a minimum.

The least squares method yields results for which the sum of squared vertical deviations is minimal. When the least squares method is applied with the time scale derivative definition, a relationship emerges between the sample size and the sum of the vertical distances between the regression line and the observation values $Y_i$: the sum of the vertical distances equals half the sample size. To minimize the sum of the vertical distances, the different values of the parameters $\hat{\beta}_0$ and $\hat{\beta}_1$ resulting from the forward and backward jump operators for the simple linear regression equation $\hat{y} = \hat{\beta}_0 + \hat{\beta}_1 x$ have been of interest. An alternative approach is to construct the regression equation on a time scale. In conclusion, the time scale derivative definition has been applied to the case of the integers $\mathbb{Z}$, and solutions are suggested for the results obtained.

6. Discussion and Possible Future Studies

This study introduces only a very basic derivative concept from the time scale calculus and applies it to obtain the regression parameters. An extension of this study would be to let the integer step Z in the jump operators take a value other than 1. This would be useful especially when the assumptions of the linear regression model are violated and a robust estimate of the regression line is needed. It may be hard to determine an optimal Z analytically; we would suggest carrying out the study on generated data containing outliers, replicated at least 1000 times.

It is also possible to extend the number of regressors in the linear regression model from a single explanatory variable to several and to estimate the parameters using the time scale derivatives for both the forward and backward jump operators. Analytically speaking, it will not be easy to express the parameter estimates in a simple closed form on a time scale; in fact, one would be dealing with rather complicated formulas or matrices. Nevertheless, it may be worth extending the study to a multiple linear regression model.

In the meantime, taking Z to be 1 results in the sum of the vertical distances between the regression line and the observations being equal to half of the sample size (see Table 1 and Figure 2). The value of Z in both the forward and backward jump operators should be chosen so that the estimated regression lines from the forward and backward operators are fairly close to the actual regression line, especially when the assumptions of the linear regression model are violated.

http://dx.doi.org/10.1155/2014/354237

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

[1] M. Bohner and A. Peterson, Dynamic Equations on Time Scales: An Introduction with Applications, Birkhauser, Boston, Mass, USA, 2001.

[2] W. B. Powell, Approximate Dynamic Programming: Solving the Curses of Dimensionality, John Wiley & Sons, New York, NY, USA, 2011.

[3] E. Girejko, A. B. Malinowska, and D. F. M. Torres, "The contingent epiderivative and the calculus of variations on time scales," Optimization: A Journal of Mathematical Programming and Operations Research, vol. 61, no. 3, pp. 251-264, 2012.

[4] N. H. Du and N. T. Dieu, "The first attempt on the stochastic calculus on time scale," Stochastic Analysis and Applications, vol. 29, no. 6, pp. 1057-1080, 2011.

[5] R. Almeida and D. F. M. Torres, "Isoperimetric problems on time scales with nabla derivatives," Journal of Vibration and Control, vol. 15, no. 6, pp. 951-958, 2009.

[6] M. Bohner and A. Peterson, Eds., Advances in Dynamic Equations on Time Scales, Birkhauser, Boston, Mass, USA, 2003.

[7] M. Bohner, "Calculus of variations on time scales," Dynamic Systems and Applications, vol. 13, no. 3-4, pp. 339-349, 2004.

[8] J. Seiffertt, S. Sanyal, and D. C. Wunsch, "Hamilton-Jacobi-Bellman equations and approximate dynamic programming on time scales," IEEE Transactions on Systems, Man, and Cybernetics B: Cybernetics, vol. 38, no. 4, pp. 918-923, 2008.

[9] Z. Han, S. Sun, and B. Shi, "Oscillation criteria for a class of second-order Emden-Fowler delay dynamic equations on time scales," Journal of Mathematical Analysis and Applications, vol. 334, no. 2, pp. 847-858, 2007.

[10] C. C. Tisdell and A. Zaidi, "Basic qualitative and quantitative results for solutions to nonlinear, dynamic equations on time scales with an application to economic modelling," Nonlinear Analysis: Theory, Methods & Applications, vol. 68, no. 11, pp. 3504-3524, 2008.

[11] C. Lizama and J. G. Mesquita, "Almost automorphic solutions of dynamic equations on time scales," Journal of Functional Analysis, vol. 265, no. 10, pp. 2267-2311, 2013.

[12] S. Sanyal, Stochastic Dynamic Equations [PhD dissertation], Missouri University of Science and Technology, Rolla, Mo, USA, 2008.

[13] Q. Sheng, M. Fadag, J. Henderson, and J. M. Davis, "An exploration of combined dynamic derivatives on time scales and their applications," Nonlinear Analysis: Real World Applications, vol. 7, no. 3, pp. 395-413, 2006.

[14] X. Chen and Q. Song, "Global stability of complex-valued neural networks with both leakage time delay and discrete time delay on time scales," Neurocomputing, vol. 121, no. 9, pp. 254-264, 2013.

[15] A. Chen and F. Chen, "Periodic solution to BAM neural network with delays on time scales," Neurocomputing, vol. 73, no. 1-3, pp. 274-282, 2009.

[16] F. M. Atici, D. C. Biles, and A. Lebedinsky, "An application of time scales to economics," Mathematical and Computer Modelling, vol. 43, no. 7-8, pp. 718-726, 2006.

[17] M. Bohner, M. Fan, and J. Zhang, "Periodicity of scalar dynamic equations and applications to population models," Journal of Mathematical Analysis and Applications, vol. 330, no. 1, pp. 1-9, 2007.

[18] M. Bohner and T. Hudson, "Euler-type boundary value problems in quantum calculus," International Journal of Applied Mathematics & Statistics, vol. 9, no. J07, pp. 19-23, 2007.

[19] G. Sh. Guseinov and E. Ozyilmaz, "Tangent lines of generalized regular curves parametrized by time scales," Turkish Journal of Mathematics, vol. 25, no. 4, pp. 553-562, 2001.

[20] I. Gravagne, R. J. Marks II, J. Davis, and J. DaCunha, "Application of time scales to real world time communications networks," in Proceedings of the American Mathematical Society Western Section Meeting, 2004.

[21] I. A. Gravagne, J. M. Davis, and R. J. Marks, "How deterministic must a real-time controller be?" in Proceedings of the IEEE IRS/RSJ International Conference on Intelligent Robots and Systems (IROS '05), pp. 3856-3861, Edmonton, Canada, August 2005.

[22] I. Gravagne, J. Davis, J. DaCunha, and R. J. Marks II, "Bandwidth reduction for controller area networks using adaptive sampling," in Proceedings of the International Conference on Robotics and Automation, pp. 2-6, New Orleans, La, USA, April 2004.

[23] R. J. Marks II, I. A. Gravagne, J. M. Davis, and J. J. DaCunha, "Nonregressivity in switched linear circuits and mechanical systems," Mathematical and Computer Modelling, vol. 43, no. 11-12, pp. 1383-1392, 2006.

[24] J. S. Armstrong, Principles of Forecasting: A Handbook for Researchers and Practitioners, Kluwer Academic, Dordrecht, The Netherlands, 2001.

[25] D. R. Anderson and J. Hoffacker, "Green's function for an even order mixed derivative problem on time scales," Dynamic Systems and Applications, vol. 12, no. 1-2, pp. 9-22, 2003.

[26] R. Agarwal, M. Bohner, D. O'Regan, and A. Peterson, "Dynamic equations on time scales: a survey," Journal of Computational and Applied Mathematics, vol. 141, no. 1-2, pp. 1-26, 2002.

[27] Q. Sheng and A. Wang, "A study of the dynamic difference approximations on time scales," International Journal of Difference Equations, vol. 4, no. 1, pp. 137-153, 2009.

[28] J. Neter, M. H. Kutner, C. J. Nachtsheim, and W. Wasserman, Applied Linear Statistical Models, McGraw-Hill, New York, NY, USA, 4th edition, 1996.

[29] D. C. Montgomery and G. C. Runger, Applied Statistics and Probability for Engineers, John Wiley & Sons, New York, NY, USA, 3rd edition, 2002.

[30] J. R. Hass, F. R. Giordano, and M. D. Weir, Thomas' Calculus, Pearson Addison Wesley, 10th edition, 2004.

[31] R. P. Agarwal and M. Bohner, "Basic calculus on time scales and some of its applications," Results in Mathematics, vol. 35, no. 1-2, pp. 3-22, 1999.

Ozgur Yeniay, (1) Oznur Isci, (2) Atilla Goktas, (2) and M. Niyazi Cankaya (3)

(1) Department of Statistics, Faculty of Sciences, Hacettepe University, Beytepe, 06800 Ankara, Turkey

(2) Department of Statistics, Faculty of Sciences, Mugla Sitki Kocman University, 48000 Mugla, Turkey

(3) Department of Statistics, Faculty of Sciences, Ankara University, Besevler, 06100 Ankara, Turkey

Correspondence should be addressed to Ozgur Yeniay; yeniay@hacettepe.edu.tr

Received 8 January 2014; Accepted 6 March 2014; Published 3 April 2014

Academic Editor: Dumitru Baleanu

TABLE 1: Values for the results obtained.

             Simple linear          Sum of vertical   Observed       Estimated
             regression equation    deviations        sum of Y_i     sum of Y_i

For n = 10
Normal         3.27 + 1.01x                0               62             62
Forward        7.42 - 0.59x                5               62             57
Backward      -0.89 + 2.62x               -5               62             67

For n = 20
Normal         2.90 + 1.09x                0              155            155
Forward       11.26 - 0.90x               10              155            145
Backward      -5.46 + 3.08x              -10              155            165

For n = 30
Normal         3.45 + 0.97x                0              291            291
Forward       14.17 - 0.77x               15              291            276
Backward      -7.27 + 2.70x              -15              291            306

For n = 40
Normal         3.38 + 0.96x                0              448            448
Forward       17.94 - 0.89x               20              448            428
Backward     -11.17 + 2.81x              -20              448            468

For n = 50
Normal         3.54 + 0.93x                0              638            638
Forward       22.07 - 0.99x               25              638            613
Backward     -15.00 + 2.86x              -25              638            663