# Economics: on-line and interactive.


I. INTRODUCTION

Computers, though radically altering economic research over the last 30 years, have only recently begun to change teaching and have had limited impact on economic reasoning and methodology beyond econometrics and simulation modelling. Computer software now supplements texts with test banks and programmed exercises. Computer text editors permit faster and easier word processing.

In addition, students and others undertake very sophisticated statistical and econometric research projects using prewritten, or "canned," programs, such as SAS, SPSS, TSP, or MicroTSP, applied to databases. Users follow rote, predetermined sets of instructions, and the result is printed output containing statistics, regression coefficients, and the like. The user need not know or understand how or why these particular results are produced by the "black box." This is essentially a non-intellectual activity.

Here I propose using the computer as an on-line interactive device with which users do all modelling themselves, without resorting to prewritten programs. No model comes predevised by an expert, instructor, publisher, or other modeller. There is, therefore, no black box.

This on-line interactive approach has pedagogical, scientific, and intellectual virtues. First, one must know the economic statistics or econometrics needed to develop the desired tests or results. This avoids a bad intellectual habit: reliance on partially understood methods in one's research. Second, one learns precisely how equations, models, and statistics are generated. Students in a classroom, or individual researchers, who only partially understand the material cannot escape notice; the prospects of detecting confusion, and thus providing remedial action, increase, which raises the likelihood of accurate results. Third, through repeated application, the tools of analysis become part of one's lexicon. In other words, the ideas become one's own rather than those of an unknown expert. Fourth, and perhaps most important, each user has the maximum degree of research flexibility and portability. This advantage is especially important in light of the dynamic nature of economic analysis. How many economists have learned a package only to discover that new research demands require tests or techniques the package does not provide?

Thus, users become capable of undertaking intellectually meaningful research--running tests that they choose for their own reasons, rather than following the rote procedures of a canned program. The interactive aspect enables users to obtain immediate feedback on original ideas and explore alternatives as they occur. The computer becomes an extension of one's intellect rather than merely a fast machine.

On-line interactive computing and instruction are made possible by several languages. True Basic, developed at Dartmouth College, is such a language, as, I understand, are newer versions of Fortran and Gauss. I use APL, "A Programming Language," to teach students to work without prewritten programs. APL is compact, flexible, and well suited to economic and statistical analysis. Its strength derives from the fact that its unit of operation is the array, which may be a matrix, vector, or scalar, rather than simply the scalar, as in languages such as Basic, Fortran IV, Cobol, and Pascal.
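The array-as-unit-of-operation idea is easy to demonstrate outside APL as well. The sketch below uses Python with NumPy purely for illustration (the article itself works in APL), applying one expression to entire price and quantity arrays at once, with no explicit loop; the four goods and their figures are made up:

```python
import numpy as np

# Hypothetical prices and quantities for four goods (illustrative data only).
p = np.array([1.50, 0.75, 3.20, 2.00])   # price per unit of each good
q = np.array([10, 40, 5, 8])             # units of each good

# One array expression computes all four expenditures at once...
spending = p * q          # elementwise multiply: [15. 30. 16. 16.]
# ...and a reduction sums them, the analog of APL's plus-reduce (+/).
total = spending.sum()    # 77.0

print(spending)
print(total)
```

In a scalar-oriented language this would require an explicit loop over the goods; in an array language the loop is implicit in the notation.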

Benefits do not come without costs. First, users must learn a language. In the case of APL, portions of the character set are unique so that one must learn new symbols. Second, writing each formula anew while doing research can be tedious and distracting. The virtue of programs is that one can obtain numerous results without much intellectual effort.

In my judgment, programming is worth the extra effort. One can minimize distractions and time by effective use of well-designed languages. Once conversant in programming, one can develop one's own models and programs. The intellectual discipline of constructing one's own models reinforces sound and careful research habits whereas reliance on canned programs forces nonintellectual activity and fosters sloppy habits.

Two examples will illustrate the use of a compact language to interact on-line. First, an aggregate is constructed from individual price and quantity components. This is done in the economic context of constructing nominal GNP from quantity and price indexes of its major components, using hypothetical data over a three-year period. Next, I deflate the GNP figures. Some of the economics learned in undertaking this exercise on the computer is then explained. The second example involves matrix inversion and ordinary least squares regression.

II. CONSTRUCTING AND DEFLATING GNP

The following two tables contain hypothetical quantities and prices of components of GNP. (These could represent other sets of quantities and prices, such as sales figures for a multiproduct firm in three different locations or demand figures for three different households.)

We may represent these data by two arrays, Q and P. Q is a 3-by-4 matrix:

Each row of Q represents a year, 1987, 1988, and 1989 in order, and each column represents a component of GNP, consumption, investment, etc. To illustrate, the (2, 3) element, 210, is the quantity of Government Goods produced in 1988.

P is a 4-by-3 matrix:

Each column of P represents a year; each row, a GNP component. Elements of P are prices, so that the (3,3) element of P, .01, is the price of Government Goods in 1989. The problem is first to measure GNP in each year's current prices. GNP measured at current prices can be computed in one step by:

Q+.×P

The +.× operation is matrix multiplication. The i-j element of the result is the sum of the element-by-element products of row i of the lead matrix Q and column j of the trailing matrix P. Note the logic of matrix multiplication: multiply corresponding elements, thus ×, and add the results, thus +. APL combines these two basic functions, plus and times, into one new function, +.×, by use of the operator ".". The two arrays Q and P must be conformable because, for each result, the number of prices must equal the number of quantities. Note also that the function +.× is not commutative.
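Since the article's data tables did not survive reproduction, the NumPy sketch below uses made-up Q and P matrices, chosen only so that Q's (2, 3) element is 210 and P's (3, 3) element is .01, matching the two values quoted in the text; in NumPy the same matrix product is written `Q @ P`:

```python
import numpy as np

# Hypothetical quantities: rows are years 1987-1989, columns are GNP components.
Q = np.array([[100, 50, 200, 30],
              [110, 55, 210, 32],     # (2, 3) element is 210, as in the text
              [120, 60, 220, 34]])

# Hypothetical prices: rows are components, columns are years 1987-1989.
P = np.array([[1.000, 1.050, 1.100],
              [2.000, 2.100, 2.200],
              [0.008, 0.009, 0.010],  # (3, 3) element is .01, as in the text
              [3.000, 3.100, 3.200]])

N = Q @ P          # sum over components of quantity times price
print(N.shape)     # (3, 3): one figure per (quantity year, price year) pair
```

Conformability is enforced automatically: `Q @ P` requires Q's column count (components) to equal P's row count, and `P @ Q` would produce an entirely different 4-by-4 array, illustrating non-commutativity.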

The items on the principal diagonal are the GNP figures for each successive year, measured in that year's prices:

One of the implicit lessons of this exercise is that GNP is the product (+.×) of a set of prices, represented by an array called P, and a set of quantities, represented by an array called Q. This is the basis for understanding that policy on GNP can affect either P or Q or some combination. This in turn leads to the aggregate demand and aggregate supply model, which partitions changes in national income into price-level and output changes. The physical process of having created the GNP figure from the components P and Q etches this basic idea in students' minds early on.

We may also want to measure GNP over time in a single year's prices. In fact these calculations already appear as columns of the original +.× result above. APL thus calculates GNP in every set of prices in one simple operation. We can now present GNP figures in current and constant prices:
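The extraction of both series is easy to sketch in NumPy (again with hypothetical Q and P standing in for the article's lost tables): the diagonal of the product is the current-price GNP series, while each column values all three years' output at a single year's prices, i.e. a constant-price series:

```python
import numpy as np

# Hypothetical data (the article's tables are not reproduced here).
Q = np.array([[100, 50, 200, 30],
              [110, 55, 210, 32],
              [120, 60, 220, 34]])          # years x components
P = np.array([[1.000, 1.050, 1.100],
              [2.000, 2.100, 2.200],
              [0.008, 0.009, 0.010],
              [3.000, 3.100, 3.200]])       # components x years

N = Q @ P                       # all (quantity year, price year) combinations
current = np.diag(N)            # GNP in each year's own prices
constant_1987 = N[:, 0]         # each year's output valued at 1987 prices

print(current)
print(constant_1987)
```

Dividing `current` by `constant_1987` element by element would give an implicit deflator with 1987 as the base year.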

Once students have done these calculations, they have more confidence in the underlying economic concepts because they have, in a very real sense, created the results themselves.

III. MATRIX INVERSION AND REGRESSION

Econometrics involves matrix inversion. The fact that this calculation is very complex has forced most economists to rely on canned programs. In APL, however, matrix inversion is simple. It is performed with a built-in primitive function called domino, ⌹.

For example, consider the matrix Z:

    20 40
    40 30

To compute the inverse of Z, enter:

    ⌹Z
    ¯.03  .04
     .04 ¯.02

One may confirm that ⌹Z is the inverse of Z by calculating Z+.×⌹Z, which yields the identity matrix:

    Z+.×⌹Z
    1 0
    0 1
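The same check is easy to replicate in NumPy (used here purely for illustration), inverting the Z above and confirming that its product with Z is the identity:

```python
import numpy as np

Z = np.array([[20.0, 40.0],
              [40.0, 30.0]])

Zinv = np.linalg.inv(Z)   # the analog of APL's monadic domino
print(Zinv)               # [[-.03 .04], [.04 -.02]] up to floating-point error

I = Z @ Zinv              # should be the 2-by-2 identity matrix
print(I)
```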

Of course, matrix inversion is used in regression analysis. Regression of Y on X involves the following equation:

    B = (X'X)⁻¹X'Y

where X' is the transpose of X and the superscript ⁻¹ denotes the inverse of the expression in parentheses.

Suppose we have 20 observations for a simple consumption model:

Ordinary least squares regression is easy in APL, because ⌹ performs regression directly between a lead and a trailing argument:

    C⌹Y
    1.00

The result is the regression coefficient for the model C = bY. To estimate the model C = a + bY, one simply attaches a vector of ones to Y, calls the resulting matrix X, and runs:

    C⌹X
    45.33 .90
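The article's 20 observations are not reproduced, so the sketch below generates hypothetical data lying exactly on the fitted line C = 45.33 + .90Y; both the normal-equations formula B = (X'X)⁻¹X'Y and NumPy's least-squares solver then recover those coefficients:

```python
import numpy as np

# Hypothetical income data; C is constructed to lie exactly on the line.
Y = np.linspace(100.0, 290.0, 20)
C = 45.33 + 0.90 * Y

# Attach a vector of ones to Y, as in the text, to estimate C = a + bY.
X = np.column_stack([np.ones(20), Y])

# Normal equations: B = inv(X'X) X'C.
B_normal = np.linalg.inv(X.T @ X) @ X.T @ C
# The dedicated least-squares routine, the analog of dyadic domino.
B_lstsq, *_ = np.linalg.lstsq(X, C, rcond=None)

print(B_normal)   # [45.33 0.9] up to floating-point error
```

In practice the solver is preferred over explicit inversion for numerical stability, though both agree here.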

One may use the result to obtain predicted values of C, errors, and so forth very easily. For example, multiple assignments can be made in a single statement. Entering

    SSE←E+.×E←C-CH←X+.×B←C⌹X

creates the B-coefficients, the predicted C values, CH, the errors, E, and the sum of squared errors, SSE, in one step.
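Reading that statement right to left, the same chain can be sketched as a few NumPy lines (again with hypothetical data, this time with noise added so that the errors and SSE are nonzero):

```python
import numpy as np

rng = np.random.default_rng(0)
Y = np.linspace(100.0, 290.0, 20)
C = 45.0 + 0.9 * Y + rng.normal(0.0, 2.0, size=20)   # noisy consumption data
X = np.column_stack([np.ones(20), Y])

B = np.linalg.lstsq(X, C, rcond=None)[0]  # B   <- C domino X  (coefficients)
CH = X @ B                                # CH  <- X +.x B     (predicted C)
E = C - CH                                # E   <- C - CH      (errors)
SSE = E @ E                               # SSE <- E +.x E     (sum of sq. errors)

print(SSE)
```

A useful side check: for a least-squares fit the errors are orthogonal to the columns of X, so X'E is zero up to rounding.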

If one wishes to run a log regression, one computes natural logs of C and Y with the primitive function ⍟ and then performs the linear regression shown above on the transformed variables. Polynomials, regressions that are nonlinear in the parameters, addition of variables, and new functional forms are simple modifications of the above steps.
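A quick sketch of the log-regression step, with hypothetical data generated from C = 2.5·Y^0.8 so that the recovered intercept and slope are known in advance:

```python
import numpy as np

Y = np.linspace(100.0, 290.0, 20)
C = 2.5 * Y ** 0.8                 # hypothetical data, exactly log-linear

# Transform, as with APL's natural-log primitive, then run linear OLS.
logY = np.log(Y)
logC = np.log(C)
X = np.column_stack([np.ones(20), logY])
B = np.linalg.lstsq(X, logC, rcond=None)[0]

print(B[1])            # slope (elasticity): 0.8 up to floating-point error
print(np.exp(B[0]))    # scale term: 2.5
```

The slope of a log-log regression is an elasticity, which is one reason the transformation is so common in applied work.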

Clearly, econometric modelling is easy to do in APL without preconstructed programs. This avoids the errors and poor modelling that can result from the restrictions imposed by canned programs, and it permits researchers to develop models tailored to their unique needs.

In conclusion, only the tip of the iceberg of on-line and interactive capability has been presented here. Much more is possible. The intention here is to illustrate the simplicity and value of this approach to economic research and teaching. Both, and possibly the inherent methodology of the discipline, can be altered.

To sum up the main points:

(1) Computers can be useful tools for learning economics, now that one can deal directly with arrayed data.

(2) The start-up cost of learning a language is sufficiently low to warrant its use even in principles courses.

(3) Research can proceed quickly without restrictions inherent in built-in programs. Built-in programs are inflexible and often not portable. They can confine users to unsuitable and only partially understood procedures.

(4) In addition, on-line interaction with languages like APL can be used to simulate data and to explore model properties, as well as to study actual data. In my judgment, interactive use with such powerful languages could alter research methodology itself.


---

Title annotation: use of computers in economic research
Author: Wykoff, Frank C.
Publication: Economic Inquiry
Date: Jul 1, 1989