Time series econometrics and applied economics: a methodological perspective.

Many applied economists face problems in selecting an appropriate technique to estimate short and long run relationships with time series methods. This paper reviews three alternative approaches, viz., general to specific modeling (GETS), vector autoregressions (VAR) and vector error correction models (VECM). As in other methodological controversies, it is hard to say which one is best. It is suggested that if these techniques are seen as tools to summarize data, as in Smith (2000), there may often be only minor differences in their estimates. Therefore, a computationally attractive technique is likely to be popular. Finally, we also explain that GETS is a simple and useful technique for understanding some difficult choices in the VECM technique of Johansen.

JEL Classification: C230, C510, C520, C590.

Keywords: General to Specific Models, Cointegration, VAR Models, Alternative Methods, Short and Long Run Equations.

INTRODUCTION

There is hardly any applied economic work without some applied econometrics. However, applied economists often lose their economic perspective and digress into demonstrating their econometric skills. Underlying this trend is a belief that economics is more scientific than it actually is, and that the accuracy of our explanations can be dramatically improved by using the latest econometric techniques and software. It is necessary, therefore, to have a good perspective on what we do as applied economists and on the extent to which econometric techniques, in particular time series techniques, are useful.

We follow Smith (2000) and distinguish between three stages in applied economic research, viz., (1) purpose, (2) summary and (3) interpretation. Within this threefold classification, econometric techniques are tools to provide summaries of data. Reliable and qualitatively unambiguous summaries are useful for interpretation and for examining whether they serve the purpose. Therefore, day-to-day applied economic work involves more than just preparing summaries of facts with alternative econometric techniques. The latest econometric techniques may improve the accuracy of these summaries, at times only marginally, e.g., improving a t-ratio from 1.9 to 2.0, but it is doubtful if such improvements alone are adequate and more important than the other two objectives in applied research. Nevertheless, it is desirable to use more than one technique to find out if alternative techniques give similar or conflicting quantitative or qualitative summaries of the same set of facts. If they all yield similar summaries, with minor differences, that increases one's confidence in their usefulness.

Given that there are alternative procedures, options and more than one time series technique to use, e.g., for testing for unit roots and estimating cointegrating equations, applied economists often forget the first and last objectives in Smith's threefold classification and digress into endlessly using alternative econometric techniques, with the expectation that these techniques provide definite answers to issues that fall into the other two stages of applied economic work. Smith (2006) says that there are about 100 options in EViews 5 (2004) to estimate cointegrating equations and it is difficult to decide which is best. Furthermore, it is not uncommon for journals to reject papers because they did not use the latest econometric technique or a less frequently used technique developed by a referee. One may be excused for saying that the economics profession has become like a firm that endlessly produces improved prototype products, without ever producing a finished product for the market. The declining enrollments in economics courses are an indication of this trend. In this paper we shall look into some popular time series techniques with these perspectives, with a view to developing a few pragmatic methodological alternatives for applied economists.

Two developments have changed the way applied economists use econometric techniques. These are the econometrics of dynamic specifications based on the error correction model (ECM) of the London School of Economics (LSE) and the unit roots and cointegration revolution. This paper briefly looks into these two developments with a view to improving the economic perspective in applied work. Its structure is as follows. Section II describes some hardline methodological issues. In Section III some frequently used econometric techniques are discussed and some conceptual merits of the ECM based general to specific approach (GETS) of the LSE are explained. In spite of some criticisms, GETS is a useful technique for understanding some difficult concepts and steps in the widely used but more demanding Johansen method of estimating cointegrating equations. In Section IV three alternative techniques are used to estimate the demand for money in the USA. However, since our main purpose is to illustrate these methods, only some selected and useful results are presented. Summary and conclusions are in Section V.

A METHODOLOGICAL DIGRESSION

In the late 1960s some LSE economists realized that there is a basic methodological conflict between the equilibrium nature of theoretical relationships and the data used to estimate them. (1) From the economists' point of view, data are collected from a world that is seldom in a state of equilibrium, and economic theory hardly gives any insights into the dynamic adjustment process of the variables. There is thus a methodological problem in testing the validity of equilibrium theories with disequilibrium data. In the past, theoretical relationships were given ad hoc dynamic specifications, e.g., partial adjustment, Almon lags etc. The LSE economists argued that, since economic theory offers little guidance on the dynamics, a better (methodological) procedure is to estimate these dynamics empirically. Among the LSE economists who took such views were A.W. Phillips, R.G. Lipsey and G.C. Archibald, who were influenced by the methodology of Karl Popper. This methodological view was also a favorite classroom discussion topic at the LSE during the 1960s. However, the econometrics of this approach was not obvious at that time.

These methodological insights of the LSE economists were subsequently used by the LSE econometricians to develop techniques to estimate the dynamics empirically. They ensured that their approach is consistent with the underlying data generating process (DGP), and therefore started with a very general unrestricted dynamic model (GUM) and developed procedures to reduce GUMs to parsimonious specifications. For this reason the LSE econometrics is known as the general to specific approach (GETS). The key element in the GETS approach is the ECM based adjustment of Phillips, drawn from his contributions to stabilization policy, where optimal values of policy instruments are determined to keep target policy variables close to their desired values. As Alogoskoufis and Smith (1991) have noted, ECM is very much an LSE story, of Phillips, Sargan and Hendry in particular.

The LSE approach is now frequently referred to as the Hendry approach, since Hendry is its strongest proponent and has made several valuable contributions. Granger (2003), in his Nobel Prize lecture, also acknowledges that the Engle-Granger concept of cointegration was inspired by the LSE concept of ECM. Thus both ECM and GETS have changed the way equilibrium theoretical relationships are given dynamic empirical specifications and estimated. At first, ECM and GETS did not raise doubts about the validity of the classical methods of estimation and their summary test statistics. ECM and GETS were the beginning of the end of the then popular partial adjustment models.

However, the prominence of the LSE-Hendry approach was prematurely eclipsed by a second and most influential development, due to Engle and Granger. This second development is the unit roots and cointegration revolution, or econometrics with a time series perspective. Initially time series econometrics was used mainly to forecast a single variable, using its own past values and past errors. Therefore, this approach did not use any insights from economic theory. These techniques were developed by Box and Jenkins and are known as the Box-Jenkins (BJ) techniques. The BJ equations have outperformed forecasts based on large scale econometric models and led to a rethink on traditional econometric models. (2)

Some time series econometricians (with more background and interest in economics) have explored the scope for extending the univariate BJ methodology to multivariate models and also for taking into account the implications of economic theory. This has led to two developments, viz., VAR and structural VAR models. Sims (1980) developed the VAR approach and it is popular with many US researchers. VAR models treat all the variables in a model as endogenous and estimate their reduced forms with a view to generating more accurate forecasts than the BJ equations. Implicit in this approach is the idea that to make forecasts of a variable Y, not only its own past values are important, but one also needs the past values of all the other variables, X_1, X_2, ..., X_n, on which Y depends. This is an interesting insight and there are many followers of the VAR methodology. Nevertheless, as Evans (2003) has noted, if the VAR methodology is put to a market test, it does not seem to have been a great success because no commercial forecasting firm seems to be using VAR models.
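
To make this idea concrete, the following is a minimal sketch of a VAR forecast in Python with statsmodels, using simulated data; the series names, sample size and lag settings are illustrative assumptions, not taken from this paper.

```python
# A small VAR on two simulated stationary series: forecasts of y use
# lagged values of both y and x, not just y's own past, which is the
# central idea of the VAR approach described above.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
n = 200
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + rng.normal(scale=0.1)
    y[t] = 0.4 * y[t - 1] + 0.3 * x[t - 1] + rng.normal(scale=0.1)

data = pd.DataFrame({"y": y, "x": x})
res = VAR(data).fit(maxlags=8, ic="aic")                # lag order by AIC
print(res.forecast(data.values[-res.k_ar:], steps=6))   # 6-step forecasts
```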

On the other hand, some economists think that VAR models are atheoretical. Therefore, they have developed structural VAR models in which the relationships implied by economic theory (between a set of variables) are incorporated into the VAR equations. There are two different approaches here, viz., the Engle-Granger (1987) models based on cointegration and ECM, and the structural VAR (SVAR) models in which the economic structure is recovered from the residuals of the VAR models. Of these two developments ECM models are more popular. Although initially ECM models were estimated with single equation methods, the systems based method of Johansen (1988) has given a tremendous impetus to empirical work and is known as the vector error correction approach (VECM). Other single equation estimation methods have also been developed. These are the Phillips-Hansen (1990) fully modified OLS (FMOLS), the Stock-Watson (1993) dynamic OLS (DOLS) and the Pesaran and Shin (1995) bounds test. In addition, Banerjee et al. (1993) showed that the Hendry-GETS approach is also consistent with the Engle-Granger ECM approach and that asymptotically GETS is as good as FMOLS. At times the single equation methods are attractive, when there is no serious endogeneity problem and the sample size is so limited that VECM may not yield any meaningful results.

Pre-testing the variables for their order of integration is a necessary condition before the ECM and VECM techniques are applied, and this was made easy by Dickey and Fuller with their well known ADF tests. Previously, the order of integration of a variable was determined by inspecting the graphs of its autocorrelation functions.
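
As a sketch of this pre-testing step, the fragment below applies the ADF and KPSS tests to a simulated random walk in levels and in first differences using statsmodels; the settings are illustrative, not those used later in the paper.

```python
# Pre-testing for the order of integration: ADF (null: unit root) and
# KPSS (null: stationarity) applied to levels and first differences.
import numpy as np
from statsmodels.tsa.stattools import adfuller, kpss

rng = np.random.default_rng(1)
series = np.cumsum(rng.normal(size=250))        # simulated I(1) process

for name, s in (("levels", series), ("first differences", np.diff(series))):
    adf_p = adfuller(s)[1]
    kpss_p = kpss(s, regression="c", nlags="auto")[1]
    print(f"{name}: ADF p-value = {adf_p:.3f}, KPSS p-value = {kpss_p:.3f}")
# The I(1) pattern: ADF cannot reject in levels but rejects in first
# differences, while KPSS rejects stationarity in levels only.
```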

An important development following the ADF tests is the Nelson and Plosser (1982) finding that many macroeconomic variables are non-stationary. If so, this has two implications. First, various conventional summary and test statistics (based on the classical statistical methods), e.g., the adjusted R², DW, χ² and F tests, are not valid. If they are computed as usual, they will underestimate the variances of the residuals, leading to overestimated adjusted R² values and t-ratios of the coefficients etc. Therefore, using them leads to spurious conclusions. This important finding has given a further boost to the application of time series methods of estimation in applied economic work. The second implication of the Nelson and Plosser finding is that it has raised doubts about the then existing theories of economic fluctuations. Keynesian and new classical economists had been modeling economic fluctuations as if they were transitory deviations from a deterministic trend due to demand shocks. If GDP is a unit root variable, then output does not revert to its deterministic trend after a shock. This has led to the development of the supply shocks based real business cycle theory as an alternative explanation of cycles. However, the Nelson and Plosser findings are still controversial and we shall briefly look into this controversy shortly.
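
The spurious regression problem is easy to reproduce: regressing one independent random walk on another, as in the sketch below (simulated data, statsmodels OLS), routinely produces a large t-ratio even though no relationship exists.

```python
# Spurious regression: two independent random walks, yet OLS reports a
# "significant" slope and a non-trivial R-squared.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 200
y = np.cumsum(rng.normal(size=n))   # unit root variable
x = np.cumsum(rng.normal(size=n))   # independent unit root variable

res = sm.OLS(y, sm.add_constant(x)).fit()
print(f"t-ratio on x = {res.tvalues[1]:.2f}, R-squared = {res.rsquared:.2f}")
# With no true relationship, |t| > 2 occurs far more often than the
# nominal 5%, which is the spurious inference described above.
```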

It may be recalled that the main objective of time series econometrics, whether of the univariate BJ type or the multivariate VAR type, is to make better quality forecasts. However, ECM, VECM and SVAR models can also be used to test, directly or indirectly, the underlying economic theories, although this is not simple. (3) Smith (2006) pointed out that when the editors of the Journal of Econometrics invited readers, in 1995, to name a paper that changed the way we think about some economic theories and propositions, the editors had yet to receive a reply. Therefore, purpose and interpretation do not seem to be attracting as much attention as using alternative econometric techniques to test for unit roots and estimate cointegrating relationships. This is not to say that the latest developments in techniques are not important. It is just to say that applied economic work is more than endlessly preparing summaries of facts with alternative econometric techniques.

In the following sections we shall examine some widely used alternative techniques in applied economics. What is important to note, at the outset, is that some leading time series experts disagree on their relative merits. A sample of these disagreements, from Smith (2000, p. 242), is as follows:

Hendry, the strongest proponent of GETS, has the following to say on Sims's VAR approach:

"Now let's get on to Sims's idea that the Sims ... theorem allows one only to throw all the garbage into a model and leave it hanging around like that and then make inference.... You just get uninterpretable eects."

Commenting further on the concept of cointegration of Engle and Granger (EG) and Johansen's VECM, Hendry says that

"I actually thought cointegration was so blindingly obvious that it was not even worth formalizing it."

Hendry is also critical of the claims that structural VAR (SVAR) models, a development from VAR, can identify, with any degree of accuracy, the underlying structural parameters of the models. He says that

"Where this gets to its most ludicrous, in my view, is using identified VARs trying to interpret shocks. You could have a completely structural model of the economy and the shocks not be structural and never interpretable. I think identification problems occur in that literature because you are trying to identify the unidenti-fiable. Shocks are made up of everything that is missing from the model."

Although Hendry defends the LSE approach against the US VAR and SVAR approaches, the LSE approach is criticized because no formal cointegration tests are conducted on the variables in the ECM part of GETS. However, Banerjee et al. (1993) subsequently showed that GETS is as good as or better than FMOLS. Furthermore, Ericsson and MacKinnon (2002) have developed various tests, similar to the well known MacKinnon (1991) tests for cointegration in the EG two-step procedure, to test for cointegration between the levels of the variables in the ECM part of GETS.

Now consider what Sims, the creator of VAR, has to say in Smith (2000, pp. 238-9) on the concept of cointegration, and therefore on VECM and, by implication, GETS:

"The approach of bringing in theory in a casual and naive way, ... is really a bad approach. We see, for example, a cointegrating relationship between M, P, Y, and r, presented and then the possibilities discussed of treating it as a money-demand relation or, if the sign on the coefficient on r is wrong, as a money supply relation. This is really no different from saying, 'We are going to regress p on q or q on p and if the coecient comes out negative we will call it a demand curve and if it turns out positive we will call it a supply curve'. If somebody did that, we would all recognize that this is a fallacy, a naive way to proceed. There is an identification problem, and once there is an identification problem probably a regression of q on p is neither demand nor supply, and if there is a situation where you have a doubt you cannot solve it just by looking at the sign of a coecient. This is just as true in a cointegration relationship as in an ordinary regression. Cointegration analysis is of no help in identification. That is my view."

The alternative methods (EG, GETS, VAR and VECM) are all based on the autoregressive (AR) formulations. Some time series specialists take a skeptical view of their use. Instead they suggest using autoregressive integrated moving average (ARIMA) formulations. Harvey (1997, p. 199), for example, says that

"... many applied economists resort to fitting a vector autoregression. Such a model is usually called a VAR. To many econometricians, VAR stands for 'Very Awful Regression'. (I am indebted to Arnold Zellner for introducing me to this delightful terminology.) ... However, they become a little more respectable if modified in such a way that they can embody cointegration restrictions which reflect long run equilibrium relationships. The vector error correction mechanism (VECM) has been very influential in this respect as it enables the researcher to make use of the procedure devised by Johansen (1988) to test for the number of co-integrating relationships....

"There are a number of reasons why my enthusiasm for VAR-based cointegration methods is somewhat muted. The fundamental objection is that autoregressive approximations can be very poor, even if a large number of parameters is used. One of the consequences is that cointegration tests based on autoregressive models can, like unit root tests, have very poor statistical properties.... However, casting these technical considerations aside, what have economists learnt from fitting such models? The answer is very little. I cannot think of one article which has come up with a co-integrating relationship which we did not know about already from economic theory."

Given such hardline methodological positions of the leading econometricians, it is hard to evaluate their relative merits in a non-controversial manner. Consequently, there are often appeals to authority to defend one's methodological choice.

One fruitful way of looking at the relative merits of these alternatives, from a philosophical perspective, is best stated with the following observation by Granger (1997, p. 169):

"... the actual economy appears to be very complicated, partly because it is the aggregation of millions of non-identical, non-independent decision-making units, ... A further practical problem is that the observation period of the data does not necessarily match the decision making periods (temporal aggregation). It can be argued that even though the quantity of data produced by a macroeconomy is quite large, it is still quite insucient to capture all of the complexities of the DGP. The modeling objective has thus to be limited to providing an adequate or satisfactory approximation to the true DGP. Hopefully, as modeling technology improves and data increases, better approximations will be achieved, but actual convergence to the truth is highly unlikely." (my italics)

This is a pragmatic methodological view because it encourages one to keep an open mind on alternative results and, more importantly, confirms the view that economics is far from an accurate science. However, it is difficult to accept Granger's optimism that better modeling technologies and data would eventually resolve such methodological differences. It may be said that no matter how sophisticated and up to date the techniques are, we may never know the truth about the dynamics of the equations. Let us take an example to justify this pessimism. The Nelson and Plosser finding that many macro variables are non-stationary has changed the way economists model macroeconomic fluctuations, i.e., as temporary deviations from a deterministic trend due to surprise demand shocks. However, there is no unanimity on Nelson and Plosser's findings. For example, Perron (1989) found that out of the 13 variables analyzed by Nelson and Plosser, 10 are stationary. On the other hand, Zivot and Andrews (1992) found that only 3 variables are stationary and Lumsdaine and Papell (1997) found 5 variables to be stationary; see Byrne and Perman (2006). Similarly, Sen (2006) found that 5 of the Nelson and Plosser variables are stationary. Sen also found that, contrary to the popular belief, GDP per capita is stationary in 6 out of 18 OECD countries. It is very hard to justify why a Lucas-type surprise-shock model is applicable to these 6 OECD countries, viz., Austria, Belgium, Canada, Denmark, the USA and the UK, and not to the other 12 OECD countries in Sen's sample. It is unlikely that future developments will resolve this issue, as Granger hopes.

In the meantime an applied economist can select any finding that suits his or her modeling strategy. This is the main problem with any methodological disagreement, and it is therefore difficult to believe that economics is a science like physics, where disagreements are often about the value of the fifteenth digit after the decimal point. This should also dispel the myth that we can transform economics into a science by using the latest techniques in applied economic work. A pragmatic methodological position is to use alternative techniques to check whether or not they give similar summaries of facts and to prepare alternative scenarios of interpretation for policy. At the end of the day, it is up to the policy makers to select a particular policy advice, hopefully after examining its political and social implications.

From this perspective, the disagreements between the time series experts are less serious. Applied economists may choose a few simple alternative techniques with which they are comfortable and pay attention to the purpose and interpretation of their applied work. Their interpretations, right or wrong, are always open to refutation. Therefore, in the rest of this paper, we shall illustrate the use of a few popular techniques which are convenient for preparing summaries of data. In this process, we also show that GETS, from an economist's perspective, is a very useful approach for understanding some difficult concepts in the Johansen VECM technique. This is not to say that GETS is a superior econometric technique, but that it is a useful alternative technique.

THE LSE GETS TECHNIQUE AND ITS USEFULNESS

We shall use the demand for narrow money (M1), a frequently estimated relationship, as an example to illustrate the GETS approach because there is little theoretical controversy about its specification. At a basic level, the demand for money is assumed to depend on a scale variable, often real GDP, and on the opportunity cost of holding money (a perfectly liquid asset), which is the nominal rate of interest on less liquid assets like treasury bills and other short term deposits. A simple semi log-linear specification, often used in empirical studies, is:

ln(M_t/P_t) = α_0 + α_1 ln Y_t - α_2 R_t + ε_t   (1)

where M is narrow money (M1), P is the price level (usually the GDP deflator), Y is real GDP, R is the nominal rate of interest and ε_t is a white noise error term. In other words, the mean of ε is zero, i.e., E(ε_t) = 0; E(ε_i ε_j) = σ² when i = j and E(ε_i ε_j) = 0 when i ≠ j; and σ² is a constant, invariant with respect to the sample size. In time series models this white noise assumption about the error term is standard. Sometimes it is said that the error is homoscedastic and exhibits no autocorrelation. (4)

GETS specifications of the demand for money can be obtained in two different ways, viz., from the perspective of the economist and from that of the econometrician. The former is easy to understand and the latter was developed to show that the GETS approach is consistent with time series econometrics. From the economist's perspective, the dependent variable changes if the explanatory variables change. Furthermore, even if there is no change in the explanatory variables, the dependent variable may change in the current period because it did not fully adjust to its equilibrium value in the previous period. This latter effect is the basis for the development of the famous ECM. Based on these notions, equation (1) can be given the following specification:

Δln(M_t/P_t) = β_1 Δln Y_t - β_2 ΔR_t - λ[ln(M_{t-1}/P_{t-1}) - (α_0 + α_1 ln Y_{t-1} - α_2 R_{t-1})] + ε_t   (2)

where the term in the square brackets is the past period deviation from equilibrium, or the lagged ECM. Note that λ is the adjustment coefficient and the ECM term enters with a negative sign because it is sensible to assume that, in each period, the adjustment process reduces the past deviation from equilibrium. If real money balances were above their equilibrium value in period (t-1), i.e., the term in the square brackets is positive, actual real balances will decrease in the current period. Therefore, the coefficient on the lagged ECM should be negative. This is known as the negative feedback mechanism and is analogous to the thermostat in a refrigerator. For this mechanism only the sign of the feedback matters; the absolute value of λ determines the adjustment path: if λ < 1, adjustment towards equilibrium will be smooth, otherwise there could be fluctuations and overshooting in the adjustment process.
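
The negative feedback mechanism can be illustrated with a small simulation; the values of λ below are arbitrary and serve only to show smooth adjustment versus overshooting.

```python
# Adjustment paths implied by the ECM term alone: the deviation from
# equilibrium shrinks each period by a fraction lambda.
import numpy as np

def ecm_path(lam, start=1.0, equilibrium=0.0, periods=10):
    """Path of the variable when only the ECM term drives its change."""
    m, path = start, [start]
    for _ in range(periods):
        m = m - lam * (m - equilibrium)   # negative feedback on deviation
        path.append(m)
    return np.round(path, 3)

print("lambda = 0.5:", ecm_path(0.5))   # smooth convergence
print("lambda = 1.5:", ecm_path(1.5))   # overshooting and oscillation
```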

From the econometrician's perspective, the above specification is too restrictive in its dynamics and may not be consistent with the underlying data generating process. A more general unrestricted dynamic specification is: (5)

Δln(M_t/P_t) = α_0 + Σ_{i=1}^{n} β_i Δln(M_{t-i}/P_{t-i}) + Σ_{i=0}^{n} γ_i Δln Y_{t-i} + Σ_{i=0}^{n} δ_i ΔR_{t-i} - λ[ln(M_{t-1}/P_{t-1}) - (α_1 ln Y_{t-1} - α_2 R_{t-1})] + ε_t   (3)

After estimating the unrestricted general dynamic equation, the next step in GETS is to use variable deletion tests to delete the insignificant lagged variables and obtain a parsimonious equation. This second step is not only time consuming but somewhat arbitrary, and the final parsimonious equation need not be unique because it is not independent of the sequence in which insignificant variables are deleted. More recently, Hendry and Krolzig (2001) have developed automated model selection software, PcGets, to search for parsimonious lag structures. PcGets is especially useful in large samples. (6)
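
The reduction step can be sketched as a simple backward elimination; this is only the bare idea, with simulated data and an illustrative 5% deletion rule, and is far cruder than the multi-path searches of PcGets.

```python
# A stylized general-to-specific reduction: start with generous lags of
# the differenced regressor and repeatedly drop the least significant
# one until all survivors pass a 5% t-test.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 300
x = np.cumsum(rng.normal(size=n))     # I(1) regressor
y = np.zeros(n)
for t in range(1, n):                 # an ECM data generating process
    y[t] = (y[t - 1] - 0.2 * (y[t - 1] - 0.5 * x[t - 1])
            + 0.3 * (x[t] - x[t - 1]) + rng.normal(scale=0.2))

dx = pd.Series(np.r_[np.nan, np.diff(x)])
df = pd.DataFrame({"dy": np.r_[np.nan, np.diff(y)],
                   "y_lag1": pd.Series(y).shift(1),
                   "x_lag1": pd.Series(x).shift(1)})
for i in range(0, 4):                 # candidate short-run lags
    df[f"dx_lag{i}"] = dx.shift(i)
df = df.dropna()

keep = [c for c in df.columns if c != "dy"]
while True:
    res = sm.OLS(df["dy"], sm.add_constant(df[keep])).fit()
    # only the short-run dx terms are candidates for deletion here
    cands = {c: res.pvalues[c] for c in keep if c.startswith("dx_")}
    worst = max(cands, key=cands.get)
    if cands[worst] < 0.05 or len(cands) == 1:
        break
    keep.remove(worst)
print(res.summary())
```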

Before we use GETS to estimate a real world demand for money, its usefulness for understanding some choices in the Johansen VECM technique is worth examining. Unlike the systems approach of the Johansen VECM, GETS is a single equation method and assumes that the right hand side variables are exogenous and that the estimated parameters in the ECM are those of the demand for money. That is, by assumption there are no identification and endogeneity problems in GETS. Furthermore, in the VECM there are choices on whether a trend variable and an intercept term should be included and, if so, whether these two should be restricted to be part of the cointegrating vector or unrestricted to be part of the VAR equation. These choices are necessary to get reliable results with the Johansen procedure. GETS is useful for understanding them. The VAR with an unrestricted intercept and trend in the GETS formulation would be as follows; for convenience, we write the short-run dynamics simply as ARDL terms and ignore the error terms.

Δln(M_t/P_t) = α_0 + α_00 T - λ[ln(M_{t-1}/P_{t-1}) - (α_1 ln Y_{t-1} - α_2 R_{t-1})] + ARDL terms   (4)

The VAR with restricted intercept and trend would be as follows:

Δln(M_t/P_t) = -λ[ln(M_{t-1}/P_{t-1}) - (α_0 + α_00 T + α_1 ln Y_{t-1} - α_2 R_{t-1})] + ARDL terms   (5)

Although it is not possible to distinguish between these two specifications by estimating the GETS equations, even with non-linear least squares (NLLS), these specifications help in understanding the nature of the choices in the Johansen VECM procedure. However, it is easy to determine whether there should be an intercept or trend in the VAR by estimating the GETS specification with and without the intercept and trend terms. The only unknown choice is whether the intercept and/or trend should be constrained or unconstrained. For example, if the trend is not significant but the intercept is, then by first selecting option (3) and then option (2) in the Johansen routine of Microfit of Pesaran and Pesaran (1997), it can be decided whether the intercept should be constrained or unconstrained. These choices will be illustrated shortly.
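
A rough analogue of this choice is available in statsmodels' VECM class, where deterministic="ci" restricts the constant to the cointegrating relation, as in equation (5), and "co" leaves it unrestricted in the VAR part, as in equation (4); the mapping to Microfit's numbered options is only approximate, and the data below are simulated.

```python
# Restricted ("ci") versus unrestricted ("co") constant in a VECM.
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM

rng = np.random.default_rng(4)
n = 250
trend = np.cumsum(rng.normal(size=n))   # one common stochastic trend
data = pd.DataFrame({
    "ln_m": 1.0 + 0.7 * trend + rng.normal(scale=0.3, size=n),
    "ln_y": trend + rng.normal(scale=0.3, size=n),
    "r": 0.5 * trend + rng.normal(scale=0.3, size=n),
})

for det in ("ci", "co"):
    res = VECM(data, k_ar_diff=2, coint_rank=1, deterministic=det).fit()
    print(det, "beta:", np.round(res.beta.ravel(), 3),
          "restricted const:", np.round(res.det_coef_coint.ravel(), 3))
```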

Similarly, at times it is conceptually hard to understand the identification and endogeneity tests. GETS can be used to explain what is actually behind them, albeit in a somewhat simple manner. This illustration is a bit long, but easy to understand. Once these concepts are well understood, one does not need these long procedures for testing for identification and endogeneity. First, consider the identification problem. Suppose that, by using the Johansen method, we have found that there is only one cointegrating vector. How do we know that the estimated parameters are those of the demand for money and not those of the income or the rate of interest equation? For this purpose, let the estimated cointegrating vector be:

π_1 ln(M/P) + π_2 ln Y + π_3 R   (6)

This cointegrating vector can be normalized on any one of the three variables and the corresponding ECM equation can be obtained. Let us denote the ECM when (6) is normalized on money as ECMM. Similarly, ECMY and ECMR can be obtained and these are:

ECMM_t = ln(M_t/P_t) + (π_2/π_1) ln Y_t + (π_3/π_1) R_t

ECMY_t = ln Y_t + (π_1/π_2) ln(M_t/P_t) + (π_3/π_2) R_t   (7)

ECMR_t = R_t + (π_1/π_3) ln(M_t/P_t) + (π_2/π_3) ln Y_t

Now, we can estimate the following three equations by adding the appropriate ARDL terms:

Δln(M_t/P_t) = λ_1 ECMM_{t-1} + ARDL terms

Δln Y_t = λ_2 ECMY_{t-1} + ARDL terms   (8)

ΔR_t = λ_3 ECMR_{t-1} + ARDL terms

If λ_1 is negative and significant, then the estimated cointegrating vector (CIV) is the demand for money. Suppose instead that the CIV is actually a monetarist income equation. In that case, either λ_1 would have the wrong positive sign (whether it is significant or not does not then matter) or, if λ_1 is negative, it would be insignificant.
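
The following sketch implements this check on simulated data: a single cointegrating vector (the civ dictionary below is an illustrative assumption) is normalized on each variable in turn, and the lagged ECM is required to be negative and significant only in that variable's own equation.

```python
# Identifying which equation a cointegrating vector belongs to, by the
# sign and significance of the loading on each normalized lagged ECM.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 300
ln_y = np.cumsum(rng.normal(size=n))
r = np.cumsum(rng.normal(size=n))
ln_m = np.zeros(n)
for t in range(1, n):   # money adjusts towards 0.5*ln_y - 0.03*r
    ln_m[t] = (ln_m[t - 1]
               - 0.2 * (ln_m[t - 1] - 0.5 * ln_y[t - 1] + 0.03 * r[t - 1])
               + rng.normal(scale=0.2))

df = pd.DataFrame({"ln_m": ln_m, "ln_y": ln_y, "r": r})
civ = {"ln_m": 1.0, "ln_y": -0.5, "r": 0.03}    # illustrative CIV

for var in df.columns:
    w = {k: v / civ[var] for k, v in civ.items()}   # normalize on var
    ecm = sum(w[k] * df[k] for k in df.columns)
    res = sm.OLS(df[var].diff().iloc[1:], ecm.shift(1).iloc[1:]).fit()
    print(f"D.{var}: loading = {res.params.iloc[0]:.3f}, "
          f"t = {res.tvalues.iloc[0]:.2f}")
# Only the money equation should show a significant negative loading.
```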

What about endogeneity? This is easy to understand with the GETS approach. If income, for example, is endogenous, i.e., it depends on money, then disequilibrium in the money market should have an effect on income. Let us say that we have identified the single CIV as the demand for money. Then we estimate the income equation with the lagged ECMM, i.e., the following equation:

Δln Y_t = θ ECMM_{t-1} + ARDL terms   (9)

If income is endogenous, θ will be significant. However, the sign of θ does not matter, because what we are testing here is only whether disequilibrium in the money market has any effect on income, and not the direction in which income changes. If income is in fact found to be endogenous with respect to money, one of the implications is that single equation methods of estimation like GETS, FMOLS and EGOLS give biased estimates of the parameters. However, FMOLS is claimed to correct for some of this endogeneity bias.
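
A bare-bones version of the test in (9), with an artificial disequilibrium series standing in for ECMM; everything here is simulated and illustrative.

```python
# Weak exogeneity check: regress the change in income on the lagged
# money-market disequilibrium; significance (of either sign) indicates
# that income is endogenous.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 300
ecmm = pd.Series(rng.normal(size=n))   # stand-in for ECMM
# income that reacts to lagged disequilibrium, hence endogenous
d_ln_y = 0.3 * ecmm.shift(1) + rng.normal(scale=0.5, size=n)

res = sm.OLS(d_ln_y.iloc[1:], ecmm.shift(1).iloc[1:]).fit()
print(f"theta = {res.params.iloc[0]:.3f}, t = {res.tvalues.iloc[0]:.2f}")
```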

With this background, it is easy to proceed further and we shall use monthly US data for the period 1974M1 to 1993M12 to estimate the demand for narrow money with alternative methods. (7)

ESTIMATES WITH ALTERNATIVE METHODS

We selected a period to avoid structural breaks in the data and shall not discuss the details of the unit root tests. Application of the standard ADF, Phillips-Perron and KPSS tests showed that output, real money and the nominal rate of interest are unit root variables in levels and stationary in their first differences. Since our objective is to illustrate a convenient methodological approach for applied economic research, we avoid presenting detailed secondary results. In both research and teaching at the University of the South Pacific, we found that it is convenient to start with GETS and FMOLS techniques. Since FMOLS takes into account some endogeneity bias, there will be noticeable differences in the estimated parameters with these two methods when endogeneity is a serious problem. That would call for the use of VECM and/or GETS estimates with the instrumental variables method, say by using the non-linear two stage least squares (NL2SLS) routine in Microfit. The parsimonious GETS estimate of the US demand for money with non-linear least squares (NLLS) in Microfit is as follows:

Δln(M_t/P_t) = -0.047 [ln(M_{t-1}/P_{t-1}) - (0.660 ln Y_{t-1} - 0.028 R_{t-1})] + ARDL terms   (10)
               (4.77)

Adjusted R² = 0.526, SEE = 0.005, Period: 1974M10 - 1993M6

The t-ratio of λ is shown in parentheses below its coefficient. All the estimated coefficients are significant at conventional levels. χ² summary statistics for serial correlation, functional form misspecification, non-normality of errors and heteroscedasticity are found to be satisfactory but are not reported, to conserve space. PcGets was used to obtain the optimal lag structure and the equation was reestimated with NLLS in Microfit. Data from 1993M7 to 1993M12 are used to generate ex post forecasts. It may be noted that the trend is not included because its coefficient is insignificant, with a t-ratio of 1.30. This is useful for the selection of options in the Johansen VECM approach.

These GETS estimates of the coefficients in the ECM part imply that the income elasticity of the demand for money is 0.660 and the interest rate semi-elasticity is -0.028. The speed of adjustment λ at 0.047 per month implies that about 55% of the adjustment towards equilibrium takes place in a year. These are all plausible values. Finally, in order to meet the criticism against GETS that it does not test for cointegration and mixes I(1) level variables with I(0) variables in their first differences, we have used the Ericsson and MacKinnon (2002) K(3) test statistic. The 5% absolute critical value is 3.5057 and the t-ratio of λ at 4.77 exceeds this critical value, rejecting the null of no cointegration. The only weakness of GETS, as of all other single equation methods of estimation, is the likely endogeneity bias.

In order to get an idea of the possible endogeneity bias, we have estimated this equation with the Phillips-Hansen FMOLS method and obtained the following cointegrating equation: (8)
ln(M/P) = 1.643 + 0.609 ln Y - 0.025 R   (11)
          (3.22)  (9.37)     (8.53)


Although the estimated coefficients in GETS and FMOLS are close, there is a noticeable difference in the estimates of the income elasticity. Therefore, it may be said that there could be some endogeneity bias and it is preferable to estimate the demand for money with the systems method VECM or by using the instrumental variables method in NL2SLS.

However, when the NL2SLS method is used to estimate the GETS specification, there is no significant difference in the estimates of the coefficients, although the estimates are sensitive to the choice of instruments. For example, the income elasticity ranged from 0.454 to about 0.661. (9) Since we have a large sample, we have used an eighth order VAR. Once again using the insights from GETS estimation, we first selected option 3 in Microfit, which is the no trend and unrestricted intercept option. This implied that there is no cointegration between these variables. We then selected option 2, which restricts the intercept to be part of the cointegrating vector. The eigenvalue test, with this option, implied that the null that there is no cointegrating vector can be rejected at the 90% level. The trace test is more conclusive and the null that there is no cointegrating vector is rejected at the 95% level. The null that there is at most one cointegrating vector is easily accepted by both tests. The estimated single cointegrating vector is as follows:

-0.99424 ln(M/P) + 0.44094 ln Y - 0.033959 R + 3.1138   (12)
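
For readers without Microfit, the same rank tests can be sketched with statsmodels' coint_johansen, whose det_order argument offers a coarser deterministic-term menu than Microfit's options; the data below are simulated with one common trend, so one cointegrating vector is expected.

```python
# Johansen trace and maximal eigenvalue tests, plus the normalized
# first cointegrating vector.
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(7)
n = 250
trend = np.cumsum(rng.normal(size=n))
data = pd.DataFrame({"ln_m": 0.7 * trend + rng.normal(scale=0.3, size=n),
                     "ln_y": trend + rng.normal(scale=0.3, size=n),
                     "r": 0.5 * trend + rng.normal(scale=0.3, size=n)})

joh = coint_johansen(data, det_order=0, k_ar_diff=8)   # constant, 8 lags
for i in range(data.shape[1]):
    print(f"H0: rank <= {i}: trace = {joh.lr1[i]:.2f} "
          f"(95% cv {joh.cvt[i, 1]:.2f}), max-eig = {joh.lr2[i]:.2f} "
          f"(95% cv {joh.cvm[i, 1]:.2f})")
print("normalized CIV:", np.round(joh.evec[:, 0] / joh.evec[0, 0], 3))
```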

After estimating the cointegrating vector, it is necessary to identify whether it is the demand for money equation or an equation for output or the rate of interest. Although this identification is a routine matter for experts with VECM, we will use a slightly longer procedure based on GETS. This procedure makes clear what is being tested. If this equation is normalized on money, we get:

ln(M/P) = 3.1319 + 0.44350 ln Y - 0.034156 R   (13)

Therefore, the implied ECM for money equation is:

ECMM = ln(M/P) - (3.1319 + 0.44350 ln Y - 0.034156 R)   (14)

Note that ECMM stands for the relevant ECM for money. Similarly, if the CIV is for income or the rate of interest, the implied equilibrium relations and ECMs are:

ln Y = -7.0617 + 2.2548 ln(M/P) + 0.077015 R   (15)

ECMY = ln Y - (-7.0617 + 2.2548 ln(M/P) + 0.077015 R)   (16)

R = 91.6922 - 29.2772 ln(M/P) + 12.9845 ln Y   (17)

ECMR = R - (91.6922 - 29.2772 ln(M/P) + 12.9845 ln Y)   (18)

Now, if OLS regressions between the changes in the relevant variables and the appropriate lagged ECMs are estimated, it can be seen that ECMM is significant and negatively signed in the equation for Δln(M/P). In the Δln Y equation, ECMY is significant but has the wrong positive sign. In the interest rate equation the lagged ECMR is negatively signed but insignificant. These results imply that only when our single CIV is interpreted as the demand for money does the corresponding lagged ECM have the expected negative feedback effect. Therefore, we conclude that this CIV is the demand for money. These regression results are given below. (10)

Δln(M_t/P_t) = -0.028383 ECMM_{t-1}   (t = 7.37)  correct sign and significant

Δln Y_t = 0.00914 ECMY_{t-1}   (t = 4.99)  wrong sign and significant

ΔR_t = -0.00469 ECMR_{t-1}   (t = 0.32)  correct sign and insignificant   (19)

It is important to note that the VECM estimate implies a much lower income elasticity of 0.4435, compared to above 0.6 with GETS and FMOLS. This is mainly due to the endogeneity bias in the single equation estimates. Neither GETS nor FMOLS seems to have adequately minimized this bias. Let us examine the endogeneity problem. The main difference between VECM and the single equation approaches is that the latter assume that the right hand side variables are exogenous. In the demand for money, therefore, single equation methods like GETS assume that income and the rate of interest are independent of money. In VECM all three variables are endogenous and, since it uses a systems estimation method (the maximum likelihood method), it minimizes the endogeneity bias. That there is some endogeneity bias in the GETS and FMOLS estimates is obvious from our results. Although the bias in the estimated coefficient of the rate of interest does not seem to be high, both GETS and FMOLS overestimate the income elasticity. But how do we test for the endogeneity of a variable? Let us examine whether income is endogenous; the procedure for testing the endogeneity of the rate of interest is the same.

If income is truly exogenous, then any disequilibrium in the money equation should not cause a change in income. If the lagged disequilibrium in money (ECMM_{t-1}) is significant in the income equation, no matter whether its coefficient is positive or negative, then income is not exogenous. To test this, we simply regress Δln Y on the lagged ECM of money, i.e., ECMM_{t-1}, and the result is as follows:
Δln Y_t = -0.020601 ECMM_{t-1}   (t = 4.99)  significant   (20)


It can be seen that ECMM has a significant effect on income and therefore income is an endogenous variable. Only the significance of the coefficient of ECMM, not its sign, is important here. Consequently, it can be said that single equation methods like GETS, EGOLS and FMOLS give biased estimates of the parameters.

Although qualitatively the estimated parameters with GETS, FMOLS and VECM are similar, there is a noticeable difference in the estimated income elasticities. On purely econometric grounds the VECM estimates are preferable because VECM is based on a more efficient systems method of estimation. This is found to be a valid argument because, in the ex post dynamic simulations for the period 1993M7 to 1993M12, the mean, mean absolute and root mean squared errors of the forecasts with the VECM estimates are smaller than those of GETS and FMOLS. This concludes our discussion of alternative estimation techniques.
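
The forecast comparison uses standard accuracy measures; a minimal sketch with hypothetical numbers (not the paper's actual forecasts) is:

```python
# Ex post forecast accuracy: mean error, mean absolute error and root
# mean squared error over a hold-out window.
import numpy as np

actual = np.array([0.12, 0.10, 0.11, 0.13, 0.12, 0.14])    # hypothetical
forecast = np.array([0.11, 0.11, 0.10, 0.12, 0.13, 0.13])  # hypothetical

err = actual - forecast
print(f"ME   = {err.mean():.4f}")
print(f"MAE  = {np.abs(err).mean():.4f}")
print(f"RMSE = {np.sqrt((err ** 2).mean()):.4f}")
```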

Although in this particular example VECM is found to be the most satisfactory technique, in some other instances, where we have estimated demand for money functions with annual data for several developing countries, the GETS estimates are found to be very close to the VECM estimates. This may be due to a less serious endogeneity bias in those data. Nevertheless, in spite of some limitations, GETS is useful for determining the choice of options and the subsequent tests of identification and endogeneity in the more demanding VECM approach. Therefore, in our view a pragmatic methodological approach for applied economic work with time series methods is to estimate the cointegrating equations with both the single equation and the systems methods. This approach is particularly useful if there are data limitations and VECM does not yield any meaningful results.

CONCLUSIONS

In this paper we have examined some difficult-to-resolve methodological issues in selecting alternative time series techniques in applied economic work. In spite of some hardline positions of the proponents of these techniques, it can be said that it is useful to start by estimating time series equations with the LSE-Hendry GETS technique. Since FMOLS estimates can be obtained at the press of a button in software packages like Microfit, it is also useful to use this method, especially to get some insights into the seriousness of the endogeneity problem. No doubt the VECM procedure looks complicated and more demanding. However, we believe that our explanation of how GETS can be used to understand the selection of the estimation options and to conduct the identification and endogeneity tests in VECM makes clear what is behind these procedures in this demanding systems method of estimation.

While these methodological guidelines may satisfy the needs of many applied economists, they may not satisfy those who want to go further and utilise state of the art techniques based on structural breaks. The literature on structural breaks, especially on endogenous breaks, is not yet well settled because there is no agreement on which of the several methods is best. While further theoretical developments in testing for structural changes are valuable, it may be noted that Maddala and Kim (1998) take a cautious view about their practical use, with the following observation:

"There is a lot of work on testing with unknown switch points. In practice, there is a lot of prior information and there is no reason why we should not use it. For instance, suppose there is a drastic policy change or some major event (for example, oil price shock) that occured at time [t.sub.0]. It does not make sense to ask the question of whether there was a structural change around that period. It is not very meaningful to search for a break over the entire sample period ignoring this prior information." Maddala and Kim (1998, p. 398), our italics.

These observations imply that testing for unit roots with a priori known break dates, as in Perron (1989), is perhaps more meaningful than the more recent approaches based on endogenous switching points. Joyeux (2006) explains how to deal simultaneously with cointegration and with structural breaks when the break dates are known a priori. Needless to say, the methodological guidelines of this paper are philosophical in nature and there are therefore likely to be alternative points of view.

REFERENCES

Alogoskoufis, G. and Smith, R. P., (1991), "On Error Correction Models: Specification, Interpretation, Estimation", Journal of Economic Surveys, pp. 97-132.

Banerjee, A., Dolado, J. J., Galbraith, J. W. and Hendry, D. F., (1993), Cointegration, Error-Correction and the Econometric Analysis of Non-stationary Data, Oxford: Oxford University Press.

Byrne, J. P. and Perman, R., (2006), "Unit Roots and Structural Breaks: A Survey of the Literature," in Rao, B. B. (ed.) Cointegration for the Applied Economist, Second Edition, Basingstoke: Palgrave.

Engle, R. F. and Granger, C. W. J., (1987), "Co-integration and Error Correction: Representation, Estimation and Testing", Econometrica, pp. 251-276.

Ericsson, N. R. and MacKinnon, J. G., (2002), "Distributions of Error Correction Tests for Cointegration," Econometrics Journal, pp. 285-318.

Evans, M. K., (2003), Practical Business Forecasting, Oxford: Blackwell Publishers.

EViews 5 (2004), Quantitative Micro Software, Irvine, CA.

Granger, C. W. J., (1997), "On Modeling the Long Run in Applied Economics", Economic Journal, pp. 169-177.

--(2003), "Time Series Analysis, Cointegration and Applications," http://nobelprize.org/economics/ laureates/2003/granger-lecture.html

Harvey, A., (1997), "Trends, Cycles and Autoregressions", Economic Journal, pp. 192-201.

Hendry, D. F. and Krolzig, H. M., (2001), Automatic Econometric Model Selection, Timberlake Consultants Press.

Johansen, S., (1988), "Statistical Analysis of Cointegrating Vectors", Journal of Economic Dynamics and Control, pp. 231-254.

Joyeux, R., (2006), "How to Deal with Structural Breaks in Practical Cointegration Analysis," in Rao, B. B. (ed.) Cointegration for the Applied Economist, Second Edition, Basingstoke: Palgrave.

Lumsdaine, R. L. and Papell, D. H., (1997), "Multiple Trend Breaks and the Unit Root Hypothesis," Review of Economics and Statistics, pp. 212-218.

MacKinnon, J. G., (1991), "Critical Values for Cointegration Tests," in R. F. Engle and C. W. J. Granger (eds), Long-run Economic Relationships: Readings in Cointegration, pp. 267-276, Oxford: Oxford University Press.

Maddala, G.S. and Kim, I.M. (1998), Unit-Roots, Cointegration and Structural Change, (Cambridge: Cambridge University Press).

Nelson, C. R. and Plosser, C. I., (1982), "Trends and Random Walks in Macroeconomic Time Series: Some Evidence and Implications," Journal of Monetary Economics, pp. 139-162.

Perron, P., (1989), "The Great Crash, the Oil Price Shock and the Unit Root Hypothesis," Econometrica, pp. 1361-1401.

Pesaran, M. H. and Shin, Y., (1995), "An Autoregressive Distributed Lag Modelling Approach to Cointegration Analysis", in Storm, S., Holly, A. and Diamond, P. (eds.) Centennial Volume of Ragnar Frisch, Econometric Society Monograph, Cambridge: Cambridge University Press.

Pesaran, M. H. and Pesaran, B., (1997), Working with Microfit 4.0, (Oxford: Oxford University Press).

Phillips, P.C.B. and Hansen, B. E., (1990), "Statistical Inference in Instrumental Variables Regression with I(1) Processes", Review of Economic Studies, pp. 99-125.

Rao, B.B. and Singh, R., (2006), "Demand for Money in Fiji with Pc Gets", forthcoming, Applied Economics Letters.

Sen, A., (2006), "New Unit-root Test Designed for the trend-break Stationary Alternative: Simulation Evidence and Empirical Applications," in Rao, B. B. (ed.) Cointegration for the Applied Economist, Second Edition, Basingstoke: Palgrave.

Sims, C., (1980), "Macroeconomics and Reality", Econometrica, pp. 1-48.

Smith, R. P., (2000), "Unit Roots and all that: the Impact of time-series Methods on Macroeconomics", in Backhouse, R. E. and Salanti, A. (eds.) Macroeconomics and the Real World, (Oxford: Oxford University Press).

--(2006) "The Significance of Unit Roots and the Pitfalls of Mechanical Statistics," in Rao, B. B. (ed.) Cointegration for the Applied Economist, Second Edition, Basingstoke: Palgrave.

Stock, J. H. and Watson, M. W., (1993), "A Simple Estimator of Cointegrating Vectors in Higher order Integrated Systems," Econometrica, pp.783-820.

Zivot, E. and Andrews, D. W. K., (1992), "Further Evidence on the Great Crash, the Oil Price Shock and the Unit Root Hypothesis," Journal of Business and Economic Statistics, pp. 251-270.

B. BHASKARA RAO

University of the South Pacific, Suva (Fiji)

NOTES

(1.) Methodological controversies are mainly philosophical in nature, i.e., there are no right and wrong answers. Therefore, we generally accept a methodological approach in which we have some (subjective) faith. Consequently, there will always be disagreement on the merits of alternative methodological approaches.

(2.) A typical BJ forecasting equation looks like:

Y_t = α_0 + α_1 Y_{t-1} + α_2 Y_{t-2} + ... + ε_t + θ_1 ε_{t-1} + θ_2 ε_{t-2} + ...

where the ε_{t-i} are error terms. Note that no other variable or its lags, such as X_t, X_{t-1}, ..., appears in the equation. Therefore, BJ equations are univariate time series equations.
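
A minimal sketch of such a BJ equation in Python, with a simulated series and statsmodels' ARIMA; the ARMA(2, 2) order is an illustrative assumption.

```python
# A univariate Box-Jenkins forecast: only the series' own past values
# and past errors are used, with no other variables.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(8)
n = 300
e = rng.normal(size=n)
y = np.zeros(n)
for t in range(2, n):   # an ARMA(2, 2) data generating process
    y[t] = (0.6 * y[t - 1] - 0.2 * y[t - 2]
            + e[t] + 0.4 * e[t - 1] + 0.2 * e[t - 2])

res = ARIMA(y, order=(2, 0, 2)).fit()
print(res.forecast(steps=6))   # forecasts from past values and errors only
```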

(3.) For the difficulties in testing theories, see Smith (2006).

(4.) In the single equation time series methods based on OLS estimation, e.g., GETS and the EG two-step procedure (EGOLS), it is also assumed that the right hand side explanatory variables, like Y and R in (1), are not correlated with the error term. We shall discuss this later.

(5.) This specification was a later development in the evolution of GETS and shows that the GETS approach can be made consistent with the unit roots and cointegration methods. It can be explained with the following simple example. Let the equilibrium relation between Y and X be:

Y_t = β_0 + β_1 X_t   (i)

This can be expressed as:

Y_t = β_0 + β_1 (X_t - X_{t-1}) - Y_{t-1} + β_1 X_{t-1} + Y_{t-1}   (ii)

Equation (ii) can be rearranged as:

ΔY_t = -λ[Y_{t-1} - (β_0 + β_1 X_{t-1})] + β_1 ΔX_t   (iii)

The specification (3) in the text is a more general unrestricted dynamic specification.

(6.) The advantages of using PcGets are demonstrated in Hendry and Krolzig (2001) and Rao and Singh (2006).

(7.) The data used in this example can be obtained by request.

(8.) A window lag length of 24 with Parzen weights stabilized the estimates.

(9.) In Microfit the maximum values of the AIC and SBC are used to select the best lag order. In other programs, like EViews 5, the minimum values are used. This depends on the way these criteria are computed in the software.

(10.) Strictly speaking, one should add the ARDL terms to the lagged ECMs and find the parsimonious specification for each equation. What is said above is just a short-cut.