The Reasonable Effectiveness of Mathematics in Economics

1. Introduction

In 1960 the physicist Eugene Wigner, recipient of the 1963 Nobel Prize in Physics, wrote the now famous paper The Unreasonable Effectiveness of Mathematics in the Natural Sciences. Wigner (1960) argued that the success of mathematics in describing natural phenomena is so extraordinary that it is in itself a phenomenon that calls for explanation. In this paper, we argue that mathematics in economics is reasonably effective and that the reasons why it is reasonably effective deserve an explanation. At the time of this writing, the world is going through the worst financial and economic crisis since the Great Depression. Many have pointed their fingers at the growing use of mathematical models. We argue that mathematics does not have much to do with the present crisis.

We do so first by discussing the fundamental reasons why it is possible to apply mathematics to economics and then by arguing that the use of mathematics in economics and finance can produce, and indeed has produced, reasonably accurate forecasts. The object of science is to forecast; the mathematical models used in economics can do a reasonably good job of predicting some economic events and can provide information about when it is not possible to make reliable forecasts. In addition, it is possible to outline a research agenda that would improve the science of economics. In any other science, the aforementioned would be considered a satisfactory achievement.

In a nutshell, we believe that the reason mathematics is only reasonably effective in economics is that we apply mathematics to study large engineered artefacts (i.e., economies or financial markets) that have been designed to allow a great deal of freedom so as to encourage change and innovation. The level of unpredictability and control is clearly different when considering systems governed by immutable natural laws as opposed to artefacts constructed by humans. Some systems, such as economies or financial markets, are prone to crises. Mathematics does a reasonably good job of describing these systems. But the mathematics involved is not that of physics: It is the mathematics of learning and complexity.

The reasonable effectiveness of mathematics in economics is important in deciding how to prepare and train doctoral students in economics. Colander (2007) has written extensively on this issue and has shown how mathematics has transformed the way economics departments train the next generation. He argues that graduate training in economics has improved since the 1980s by becoming more focused on empirical research: despite the fact that there is still a strong emphasis on mathematics, there is now much more focus on applications to meaningful economic issues than on purely mathematical exercises.

2. Economies and the Natural Sciences

Let's begin by discussing the differences between economics and the physical sciences. In the three centuries following the publication of Newton's Principia in 1687, physics has developed into an axiomatic theory. Physical theories are axiomatic in the sense that the entire theory can be derived through mathematical deduction from a small number of fundamental laws. Physics is not yet completely unified but the different disciplines that make up the body of physics are axiomatic. Even more striking is the fact that physical phenomena can be approximately represented by computational structures, so that physical reality can be mimicked by a computer program.

Though it is clear that economics has made progress, and will make additional progress, only by adopting the methods of empirical science, there are significant differences between economics and physics. We can identify, albeit with some level of arbitrariness, four major differences between economics and the physical sciences:

1. Economics has little possibility of studying simplified subsystems and so must study a complex global system.

2. Economics is an empirical science but the ability to make experiments is limited when compared to what is possible with the physical sciences.

3. Economics does not study immutable laws of nature but a human artefact that is subject to change due to human decision-making.

4. Economic systems are self-reflecting: the knowledge accumulated on the system changes the system itself.

One of the major sources of the progress made by physics is the ability to isolate elementary subsystems, to formulate laws that apply to these subsystems, and then to recover macroscopic laws by a mathematical process. For example, the study of mechanics was greatly simplified by the study of the material point, a subsystem without structure identified by a small number of continuous variables. After identifying the laws that govern the motion of a material point, the motion of any physical body can be recovered by a process of mathematical integration. Simplifications of this type allow one both to simplify the mathematics and to perform empirical tests in a simplified environment.

In economics, we cannot study idealized subsystems because we cannot identify subsystems with a simplified behavior. This is not to say that attempts have not been made. Microeconomics, as opposed to macroeconomics, attempts to study the behavior of individuals as the elementary units of economic systems. The real problem, however, is that the study of individuals as economic "atoms" cannot produce simple laws because it is the study of a human economic decision-making process which is very complex in itself. In addition, we can perform few controlled experiments. Instead, we have to rely on observing how the only economic system we know evolves on its own.

Note that the possibility of studying elementary subsystems does not require the existence of fundamental laws. For example, although the Schrödinger equation of quantum mechanics is indeed a fundamental law, it applies to any system and not only to microscopic entities. Fundamental laws are not necessarily microscopic laws. We might be able to find fundamental laws of economics even if we are unable to isolate elementary subsystems.

There is a strong connection between fundamental laws and the ability to make experiments. By their nature, fundamental laws are very general and can be applied, albeit after difficult mathematical manipulations, to any phenomena. Therefore, after discovering a fundamental law it is generally possible to design experiments to test that same law. In many instances in the history of physics, critical experiments have suggested the rejection of an established theory in favor of a new competing theory.

Our ability to make experiments in economics is limited. Nevertheless, important research in this sector has been carried out. In the 1970s, Daniel Kahneman and Amos Tversky performed groundbreaking research on cognitive biases in decision-making. Vernon Smith studied different types of market organization, in particular, auctions. Kahneman and Smith were jointly awarded the 2002 Nobel Memorial Prize in Economic Sciences for their work (Tversky had passed away in 1996). Research such as that conducted by Kahneman, Tversky, and Smith changed the perspective of economics as an empirical science, though experimental economics does not allow us to design experiments to decide between competing theories as is the case with experimental physics.

Perhaps the most important difference between economics and physics is the fact that economics attempts to determine laws that apply to a specific self-reflective artefact (i.e., economies or financial markets) while physics aims at discovering fundamental physical laws. The level of generality of economics is intrinsically lower than that of physics. In addition, economic systems tend to change as a function of the knowledge accumulated about them, so the object of inquiry is not stable.

As a result of all the above, it is unlikely that the kind of mathematics used in physics is appropriate to the study of economics and financial theories. For example, we cannot expect to find any simple law that might be expressed with a closed formula. Hence, empirical testing cannot be done by comparing the results of closed formulas with experiments but more likely by comparing the results of long calculations. Thus the mathematical description of economic systems was delayed until economists had high-performance computers to perform the requisite large number of calculations.

Nor can we expect a great level of accuracy in our descriptions of economic or financial phenomena. If we want to compare economics to the natural sciences, we have to compare our knowledge of economics with our knowledge of the laws that govern macro systems. While physicists have been able to determine extremely precise laws that govern subsystems such as atoms, their ability to predict macroscopic phenomena such as earthquakes or weather remains quite limited. Parallels between economics and the natural sciences are to be found more in the theory of complex systems than in fundamental physics. In view of the above, it should be clear that the methods of scientific investigation and the findings of economics might be conceptually different from those of the physical sciences. It would likely be a mistake to expect the same type of generalized axiomatic laws in economics that we find in physics.

3. An Historical Perspective on Economics and Finance Theory

As we know them today, economics and finance theory are not unified sciences. They are the result of a process of theoretical evolution over the past two centuries that has produced different disciplines. Economics as a separate discipline developed in the eighteenth century. The birthday of economics as a science can perhaps be placed in 1776 with the publication of Adam Smith's The Wealth of Nations. During the nineteenth century and the first half of the twentieth century, economists increasingly tried to apply the paradigm of empirical science to economics in the sense that economics was increasingly based on systematic empirical observations. However, the use of mathematics in economics remained marginal. Even a profoundly innovative and enormously influential work such as Keynes's General Theory (Keynes, 1936) makes little use of mathematics.

The first attempts to create mathematical models of the economy date from the late nineteenth and early twentieth centuries and are due to Walras and Pareto. Their models were well in advance of their time: computers were not available and only closed formulas could be tested against empirical data.

The models of Walras and Pareto are classical models based on the notion of free markets in which a population of economic agents freely decides what to produce, sell, and buy within the limits of their budgetary constraints. The role of markets as the optimal allocator of resources, matching supply and demand, had already been put forward by Adam Smith. Smith famously described the coordination of market activity as the work of an "invisible hand."

At about the same time, a radically different view was put forward by Karl Marx in Das Kapital. Marx's thinking was influenced by the German philosopher Hegel. Following Hegel, Marx thought that there is a sort of mechanical necessity in historical processes, so that history follows dynamic laws. He arrived at exactly the opposite conclusion to that of the classical economists: seen from the perspective of the large majority of the population, markets are not optimal allocators of resources but end up producing a highly skewed distribution of wealth. Hence, the economy must, and inevitably will, be planned.

During the first half of the twentieth century, the objective of modeling the economy was put on the back burner. There were, however, significant conceptual developments. The Austrian economist Schumpeter made a perspicacious analysis of competitive markets and concluded that competitive economies can survive only through a process of continuous innovation; otherwise, competition eliminates any residual profit. But the most important development of the first half of the twentieth century, given also its practical importance, is the work of John Maynard Keynes. On one side, Keynes created the conceptual framework that enabled the future development of economics, identifying the key variables needed to describe the economy and the flow of savings and investment. On the other side, he partially accepted the view that free markets are not automatically resource optimizers. Hence, there is a need for government intervention in order to stabilize the economy.

Immediately after World War II, computers became a commercial reality. With computers, economists had at their disposal a tool that allowed them to perform approximate computations. They started building large econometric models made of many linear equations that link economic variables. The conceptual framework of these econometric models is essentially an engineering framework: Economic quantities are treated as objective physical variables linked by objective relationships.

One of the first examples of an econometric model is due to the Russian-American economist Wassily Leontief who, in 1942, proposed the input-output model. (1) This quantitative economic model is based on a large matrix that defines the exchanges between different economic sectors (see Leontief, 1986). Seven years later, Leontief built the first working example of an econometric model by programming an input-output model on the Harvard Mark II computer. Other macroeconometric models were subsequently proposed, input-output type models as well as models based on sets of differential equations. The first global macroeconometric model was initiated in 1968 with the Wharton Economic Forecasting Associates LINK project under the direction of Lawrence Klein. This project helped earn Klein the 1980 Nobel Memorial Prize in Economic Sciences.
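To make the mechanics of an input-output model concrete, here is a minimal sketch in Python (the three-sector technical coefficients and final demand are hypothetical, chosen only for illustration): total output x must satisfy x = Ax + d, so x = (I - A)^(-1) d.

    # Minimal illustrative Leontief input-output computation (hypothetical data).
    # Total output x satisfies x = A x + d, hence x = (I - A)^(-1) d.
    import numpy as np

    # A[i, j]: units of sector i's output needed to produce one unit of sector j's output.
    A = np.array([[0.2, 0.3, 0.1],
                  [0.1, 0.1, 0.4],
                  [0.3, 0.2, 0.2]])

    d = np.array([100.0, 50.0, 80.0])      # final demand by sector

    x = np.linalg.solve(np.eye(3) - A, d)  # total output required in each sector
    print("Total output by sector:", x)

The point of the exercise is that, once the coefficient matrix is estimated from data, the model reduces to a large linear-algebra computation, which is exactly what early computers made feasible.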

Paul Samuelson, recipient of the 1970 Nobel Memorial Prize in Economic Sciences, was one of the key actors in the development of modern mathematical economics in the late 1940s and early 1950s. Samuelson was a student of Schumpeter and Leontief at Harvard. Through one of his mentors, the mathematician Edwin Bidwell Wilson, Samuelson was deeply influenced by the work of Josiah Willard Gibbs, a founder of chemical thermodynamics. In analogy to the concept of thermodynamic equilibrium, Samuelson established the principle of comparative statics in economics. Comparative statics had already been introduced by John R. Hicks in his 1939 book Value and Capital. However, Hicks put his mathematical proofs in the appendices while Samuelson "flaunts [mathematics] in the text," as Samuelson's student Stanley Fischer remarked (Fischer, 1987, p. 235). Samuelson's methodological approach was very close to the operationalism introduced in physics by Percy Bridgman, recipient of the 1946 Nobel Prize in Physics. Samuelson used mathematics to unify and clarify the overlaps and fallacies in neoclassical economic theory in his 1947 book Foundations of Economic Analysis. In Foundations, Samuelson shows that almost any economic behavior can be described as a constrained optimization problem. His body of work on the mathematical structures underlying neoclassical theory encouraged others to pursue a mathematical approach to a wide range of economic theories and models. Among them was Robert Merton, Samuelson's doctoral student and a co-recipient of the 1997 Nobel Memorial Prize in Economic Sciences. Merton had initially pursued a doctorate in applied mathematics at Caltech.

Three major theoretical developments have occurred since the 1970s:

1. the development of general equilibrium theories;

2. the opening of options markets and, later, of markets for a much broader array of derivatives, with the consequent development of the mathematics of derivatives; and

3. the development of econometrics as we know it today.

In the next section, we will discuss these theories that now form the state of the art of economics. In doing so, we will use the term mathematical economics in a sense similar to the use of mathematical physics: mathematical economics is a type of economics where economic laws are expressed in mathematical terms as a coherent theory. However, we acknowledge that mathematical economics does not have the same logical structure as mathematical physics, but is probably more similar to mathematical biology.

4. A Critical Look at the State of the Art of Economics

Let's first briefly discuss the role of statistics in economics. Uncertainty plays a fundamental role in theories of economics and finance. Not only are we in practice unable to make precise forecasts, we assign an economic value to uncertainty. The classical and still fundamental paradigm for handling uncertainty mathematically in economics and finance is probability theory, though different paradigms such as fuzzy set theory have been proposed.

In the 1920s and 1930s, there were doubts as to the possibility of applying probability theory to economics. It was argued that while statistical estimates require independent samples, economic variables are deeply interrelated. A major clarification came in 1944 from Trygve Haavelmo. The work of Haavelmo, recipient of the 1989 Nobel Memorial Prize in Economic Sciences, ushered in modern econometrics. Haavelmo demonstrated that the application of statistics to economics did not require that empirical samples be independent. We only need to assume, after imposing modeling relationships, that residuals are independent. In modern terms, we view econometric models as "probes" that extract meaningful information from empirical data and leave out only a residual noise.

General equilibrium theories (GETs) were the dominant theoretical development in the last three decades of the twentieth century. The starting point of GET is the role of economic agents. The key principle on which GETs are based is the equilibrium between economic expectations and economic realizations. Because expectations themselves change the economy, it is argued, there must be a point of equilibrium where, albeit statistically, expectations and realizations are identical. Let's first outline the development of the theory and then discuss it.

The econometric models prior to the 1970s did not include market agents and their preferences. In 1976, Robert Lucas, recipient of the 1995 Nobel Memorial Prize in Economic Sciences, articulated what has become known as the "Lucas critique." Lucas observed that, without an explicit consideration of economic agents, no model can be used to study the effects of policy changes because models cannot be the same prior to and after a change in policy due to the reaction of economic agents. Following Lucas, it was advocated that macroeconomic models should have a microeconomic foundation, that is, that macroeconomic models should explicitly include market agents.

In 1961, the economist John Muth proposed his rational expectations hypothesis. Muth argued that economic agents cannot be systematically wrong and made the assumption that agents' expectations coincide, at least on average, with actual economic outcomes. Following Lucas and others, the rational expectations hypothesis became the mainstream paradigm and GET became the mainstream theoretical model of economics.

From the point of view of the scientific method, one should consider a GET a theoretical hypothesis to be validated. A GET is essentially the hypothesis that any economy or market can be described by the maximization of a Hamiltonian functional. As with all variational problems, a GET is a mathematically complex problem which includes an unknown utility function. In itself, a GET is not a highly restrictive hypothesis. Michael Harrison and David Kreps (1979) and David Kreps (1981) demonstrated that every stochastic price process that does not admit arbitrage can be rationalized as a GET. Therefore, in themselves, GETs do not shed any light on real economic processes. The physical equivalent of a GET would be the claim that particles follow the maximization of a Hamiltonian without any specification of the Hamiltonian; any smooth trajectory would be possible.
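To give a concrete sense of the kind of constrained optimization involved, the sketch below shows a textbook representative-agent formulation in LaTeX notation (this is an illustration, not the specific formulation of any model discussed here; the utility function u, discount factor \beta, production function f, and depreciation rate \delta are placeholders):

    \max_{\{c_t\}_{t \ge 0}} \; \mathbb{E}_0\!\left[ \sum_{t=0}^{\infty} \beta^{t}\, u(c_t) \right]
    \quad \text{subject to} \quad
    c_t + k_{t+1} \le f(k_t) + (1 - \delta)\, k_t .

The difficulty noted above is precisely that u is essentially unrestricted: by the Harrison-Kreps result, a suitable choice of u can rationalize almost any arbitrage-free price process.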

The empirical content of a GET is in the specification of the utility functions and of the exogenous variables and constraints. And this is why the empirical success of GETs is very limited. If mathematically tractable utility functions are specified, GETs yield processes that (at most) share some general aspect with real economic processes. There is no possibility of making even an approximate economic forecast using GET.

The pivotal role of GET in modern economics is due to the conceptual appeal of rational expectations. Without embarking on a discussion as to why rational expectations have so much appeal, it is quite obvious that there is no tenable empirical or logical basis behind rational expectations. Economic agents make forecasts based on past experience, they make mistakes, and their mistakes can be systematic. As demonstrated by Harrison and Kreps, one can always rationalize the economic outcome as if it were produced by agents endowed with perfect stochastic forecasts. In other words, even if we assume agents make decisions based on realistic forecasts using past data, the ensuing price processes can still be represented by the maximization of a Hamiltonian. However, the Hamiltonian will be a mathematical construct only remotely related to the actual utility functions that represent the decision-making process of agents.

GETs are axiomatic theories. Actually they were developed following the model of axiomatic mathematical theories proposed by the Bourbaki Group in Paris. (2) The level of mathematical rigor of GETs exceeds even that of physics. GETs were developed with the logical consistency of a mathematical model in mind but without a parallel concern for the empirical validation of the theory. The extreme mathematical rigor of GET without any solid empirical foundation is partly responsible for the many reactions against the use of mathematics in economics. With the GET paradigm in mind, many felt that mathematics could not explain economics.

The second major development since the 1970s was the development of the mathematics of derivatives. In 1973, Myron Scholes, co-recipient with Robert Merton of the 1997 Nobel Memorial Prize in Economic Sciences, and Fischer Black (who died before the prize was awarded) published the first formula for the theoretical pricing of option-type derivatives. Since then, a huge market for derivatives has developed: At their peak in 2008, the total value of derivatives contracts outstanding worldwide was estimated to be in the range of USD 650 trillion, with some USD 2 trillion traded daily. As regards the US economy, consider that the notional value of the U.S. derivatives market was USD 176 trillion in 2008, (3) one order of magnitude larger than both the 2008 U.S. GDP, which was approximately USD 14 trillion, and the 2008 U.S. stock market capitalization, which was around USD 13 trillion. It is then not surprising that the development of derivatives pricing models became the largest sector in financial modeling.
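To make the pricing formula concrete, here is a minimal sketch of the Black-Scholes value of a European call option in Python (the numerical inputs are hypothetical):

    # Minimal sketch of the Black-Scholes formula for a European call option.
    # Inputs (spot, strike, rate, volatility, maturity) are hypothetical.
    from math import log, sqrt, exp, erf

    def norm_cdf(x):
        # Standard normal cumulative distribution function via the error function.
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def black_scholes_call(S, K, r, sigma, T):
        d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
        d2 = d1 - sigma * sqrt(T)
        return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

    # Example: spot 100, strike 100, 5% rate, 20% volatility, one year to maturity.
    print(black_scholes_call(100.0, 100.0, 0.05, 0.20, 1.0))   # approximately 10.45

The formula itself is simple; the economic significance lies in the no-arbitrage argument behind it and in the enormous market it helped to create.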

Though the modeling of derivatives in and of itself does not add much to our understanding of economic phenomena, the creation of a derivatives market whose notional value is an order of magnitude larger than the real economy has significantly changed economic phenomena and the relationship between finance and the real economy. Derivatives can be used for managing risk but also for speculation, with a potentially high level of leverage. For example, the traded value of options is small in comparison with the value of the underlying stocks and the potential losses that can be incurred. Derivatives allow market agents to make bets on the direction of market movements; these bets can be large multiples of the amount invested.
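A small hypothetical example of this leverage (the premium of 5 is illustrative only): an at-the-money call held to expiry gains or loses a multiple of what the underlying stock does.

    # Hypothetical illustration of option leverage: returns on an at-the-money call
    # held to expiry versus returns on the underlying stock. Numbers are illustrative.
    stock_price, strike, premium = 100.0, 100.0, 5.0

    for final_price in (90.0, 100.0, 110.0):
        payoff = max(final_price - strike, 0.0)                # call payoff at expiry
        option_return = (payoff - premium) / premium           # return on the premium paid
        stock_return = (final_price - stock_price) / stock_price
        print(f"stock at {final_price:.0f}: stock {stock_return:+.0%}, option {option_return:+.0%}")

In this example a 10% rise in the stock produces a 100% gain on the premium, while a 10% fall, or no move at all, wipes the premium out.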

In practice, the presence of derivatives has added to the difficulty of mathematical modeling of the economy. In financial markets, and a fortiori in the entire economy, there are now hidden risks due to a web of interacting contracts, often the object of speculation. This risk has been present since the introduction of derivatives, but has reached new heights with the diffusion of derivatives that include a systemic risk, such as credit derivatives. The diffusion of derivatives has become a major macroeconomic phenomenon in itself; as such, it needs to be taken into consideration in macroeconomic modeling.

The third major development in the last three decades of the twentieth century is the emergence of modern econometrics and, subsequently, of the discipline loosely referred to as econophysics. Both modern econometrics and econophysics are based on a combination of economic theory, statistics, and learning theory. Both are also based on the principles of an empirical science: the accumulation of empirical data, the formulation of theoretical hypotheses, and testing. However, the key point that separates the physical sciences and econometrics is "learning." In the physical sciences, fundamental laws are assumed to be the product of some theoretical intuition. There is a component of learning in the physical sciences as parameters are empirically measured. However, due to the size of available samples and the simplicity of the laws, learning is effectively measurement. In econometrics and econophysics, in contrast, learning plays a crucial role. Models have a more universal nature in the sense that models can fit any set of data with arbitrary precision with an appropriate choice of parameters. But fitting sample data is no guarantee that the model will also fit out-of-sample data. Actually, the contrary is true: the accuracy of models in fitting sample data has to be constrained in order to capture only general features of the data and improve out-of-sample performance.

Learning theory suggests that, given a sample of data, the complexity of the models we can learn from the data is constrained by the size of the samples. If we have small samples, we can only learn very general models. This conclusion is not specific to economics but is shared by all the sciences: We can only learn from past data and what we can learn is a function of the intrinsic complexity of the process and the size of our data sample.
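A minimal sketch of this point in Python, using a hypothetical noisy series: polynomials of increasing degree are fitted to the first part of the sample and evaluated on the last part. More flexible models fit the sample better but can forecast far worse out of sample.

    # Illustrative sketch: in-sample fit improves with model complexity, while
    # out-of-sample (forecasting) performance eventually deteriorates.
    # The data-generating process is hypothetical.
    import numpy as np
    from numpy.polynomial import Polynomial

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 40)
    y = np.sin(2 * np.pi * t) + 0.3 * rng.standard_normal(t.size)

    train, test = slice(0, 30), slice(30, 40)   # fit on the past, test on the future
    for degree in (1, 3, 9, 15):
        model = Polynomial.fit(t[train], y[train], degree)
        in_sample = np.mean((model(t[train]) - y[train]) ** 2)
        out_sample = np.mean((model(t[test]) - y[test]) ** 2)
        print(f"degree {degree:2d}: in-sample MSE {in_sample:8.3f}, "
              f"out-of-sample MSE {out_sample:10.3f}")

Constraining model complexity relative to the sample, as discussed above, is exactly what prevents the behavior of the most flexible fits.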

5. Crises and Economic Theory

A frequent objection to economics as a mathematical science is the fact that economic evolution is driven by single large and unpredictable events. This notion has been popularized by Nassim Taleb, an ex-trader who coined the term "black swans" to indicate exactly this type of unpredictable, large, and generally adverse event. Taleb (2007) suggests that we rationalize black swan-type events after they have actually occurred but that these rationalizations are, in most cases, artificially constructed; forecasting such an event prior to its occurrence was, Taleb maintains, not possible.

There are two principal reasons why we object to the belief that the unpredictability of crises precludes the use of mathematics in economics. The first is based on the observation that large unpredictable events also exist in other domains such as physics, yet physicists do not abandon the use of mathematics because of the unpredictability of events. The second is that our economic systems are designed to allow large areas of unpredictability.

Single unpredictable events exist in the physical sciences as well as in economics, and this at every stage of the science. For example, in the past we witnessed many catastrophic events related to engineered artefacts, such as the collapse of the Tacoma Narrows Bridge in 1940 or the 1954 crash of the Comet airplane. In both cases, it was later discovered that, had we then the science that we have now, these events could have been predicted. Presently, we are unable to predict, and hence control or mitigate the effects of, turbulence, earthquakes, or the minimal cell modifications that might cause a terminal disease. But we can identify factors or regions that would make catastrophic events more likely. Thus a plane will avoid flying into a region of turbulence, buildings will be designed to withstand seismic movements, and a person might wish to avoid smoking to reduce the risk of cancer.

However, scientists do not relinquish the mathematical description of physical events because their ability to describe and forecast them fails on some occasions. Science has given up the perfect determinism of eighteenth century mechanics: We now consider that physical laws are probabilistic. Not only are physical laws probabilistic at the microscopic level, but uncertainty carries over at macroscopic levels in complex systems. We accept this type of unpredictability and try to reduce its negative consequences by adopting principles of safe design.

We believe that most black-swan crises in finance were indeed predictable, at least in the sense that their likelihood could have been gauged; it is the design of the system that makes their occurrence more difficult to predict. A first consideration is the availability of data. We cannot make forecasts without data, not in the physical sciences and not in economics and finance. In financial systems, especially since the 1990s, there are entire market areas open to speculation without any data that describe these areas. For example, hedge funds are not obliged to disclose how they make (or lose) their money, including shorting stocks and borrowing. In the midst of the market turmoil of the summer of 2007, one chief investment officer of equities compared managing equity portfolios without data on the amount of short selling and leveraging to driving at night on the highway when an oncoming driver has his brights on: you cannot see anything.

Other data are missing simply because of the complexity of gathering and analyzing them. In the 1950s, a good mechanic was able to diagnose a car engine by listening to the sound of the engine and test driving the car. Fifty years later, a trained ear and a drive no longer suffice: specialized equipment is used to gather a large amount of data to diagnose the engine. If we consider a higher level of complexity, say a jet engine, the process of gathering and analyzing maintenance data has developed into a separate engineering field where literally thousands of separate inputs are gathered and analyzed to check an aircraft engine. Consider the heroic days of software engineering when Fortran programmers spent sleepless nights analyzing memory dumps. Now sophisticated diagnostic and software engineering tools are, ultimately, the true enablers of modern software development.

Consider that in the field of derivatives there is no system for gathering and analyzing the data comparable with what would be in place for an engineering system of a similar size. With trading and notional values an order of magnitude larger than real economies, one might expect to see a monitoring system for the web of derivatives' mutual dependencies, including bankruptcies of the counterparties. This system, however, does not exist. Large banks are required only to monitor their own risk with systems of their own design, albeit with a bit of oversight from the Basel Committee on Banking Supervision. There is no coordinated control of the concentration of risk due to the mutual relationships. A physical parallel would be a situation where planes take off, set their routes, and land on the orders of the pilot alone, without any central control. Observe that, from a scientific point of view, much of the supposed inability of economic and financial modeling to forecast crises is due to the fact that data are neither gathered nor analyzed.

However, even if all the relevant data were known, the study of complex systems has revealed that there are situations that are genuinely unpredictable. There are many sources of non-predictability. Nonlinearities and nonlinear feedbacks are common sources of non-predictability. It is now widely accepted that, in many sectors of modern economies, the law of diminishing returns has to be replaced by the law of increasing returns. Increasing returns are typically nonlinear and create positive feedbacks and, possibly, instabilities. The theory of chaos has demonstrated that nonlinear systems can be so sensitive to even small changes in initial conditions as to become totally unpredictable.
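A standard toy example from chaos theory, not an economic model, illustrates this sensitivity: two trajectories of the logistic map that start a billionth apart diverge completely within a few dozen steps.

    # Illustrative sketch of sensitive dependence on initial conditions using the
    # logistic map x_{t+1} = r * x_t * (1 - x_t) in its chaotic regime (r = 4).
    r = 4.0
    x, y = 0.2, 0.2 + 1e-9          # two starting points differing by one part in a billion

    for t in range(1, 61):
        x = r * x * (1 - x)
        y = r * y * (1 - y)
        if t % 10 == 0:
            print(f"step {t:2d}: |x - y| = {abs(x - y):.2e}")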

Other sources of unpredictability come from aggregation phenomena. The study of percolation networks and random graphs has demonstrated that there are critical probabilities of mutual interaction between the elements of a network. When an interconnected network system approaches these critical probabilities, a giant component might suddenly appear, connecting a large fraction of the network's nodes. There is an entire statistical discipline, extreme value theory (EVT), devoted to analyzing and estimating the statistics of rare phenomena.
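The sudden appearance of a giant component can be illustrated with a short simulation (a sketch assuming the networkx library is available; the same computation can be done with any connected-components routine). In an Erdős-Rényi random graph with n nodes, the largest component jumps from a negligible size to most of the network as the connection probability crosses roughly 1/n.

    # Illustrative sketch: emergence of a giant component in a random graph
    # as the connection probability crosses the critical value ~ 1/n.
    import networkx as nx

    n = 1000
    for p in (0.0005, 0.001, 0.002, 0.004):
        G = nx.gnp_random_graph(n, p, seed=42)
        giant = max(len(c) for c in nx.connected_components(G))
        print(f"p = {p:.4f} (average degree {n * p:.1f}): largest component has {giant} nodes")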

These aggregation phenomena are the result of the complexity of the system, of the topology of the network, and of mutual interactions. Unpredictable phenomena of this type exist not only in economics but also in complex physical systems, such as fluids and weather.

We now have many tools that allow us to identify what systems are likely to become unpredictable and possibly originate catastrophic behavior. Even if we cannot control unpredictable complex systems, we can in many cases identify them and mitigate the consequences. For example, in the area of economics, our tool set includes the estimation of fat-tailed distributions performed by EVT and the analysis of the expansion of credit and money creation. Had we collected the data, we would also have tools to analyze large networks of interconnected derivatives, similar to those used in analyzing communications and the web.

In the field of engineering, engineers try not to design unpredictable complex systems: The stability of the design is a major preoccupation of designers, and rightly so. When complexity cannot be avoided, engineers closely monitor the environment to ensure that we avoid being caught in unpredictable catastrophic phenomena. Economies and financial markets are also engineered artefacts. They could be designed and monitored so as to be made safer through the collection of more data and appropriate regulation. Actually, as we are now being made painfully aware, they were not so designed or regulated. This brings us to the question as to whether we can predict what type of "system design" we will use in engineering our economies.

It can be argued that, even if we have the tools to avoid risky system design, the really critical question is whether we will use these tools or not. This is a difficult question as ultimately the real catastrophic risk resides in our collective willingness to run catastrophic risks.

Let's reformulate the above. Modern science--whose main mathematical tool is differential equations--is based on the separation between basic fundamental laws and initial and boundary conditions. We consider basic laws to be given and universal, while initial and boundary conditions can be arbitrary. Reductionism, the ultimate goal of the scientific endeavor, is the program of determining a small set of basic laws from which any other physical law can be derived by a mathematical process. Are initial and boundary conditions completely arbitrary or are they somehow determined within the system of the basic laws?

In the physical sciences, scientists began to investigate this type of problem in the theory of complex systems. One of the objectives of the study of complexity is to investigate if and how physical laws justify the appearance of the dynamic laws of complex systems. Following the tenets of reductionism, it should be possible to explain the dynamics of a complex system (i.e., a system made of many interacting parts) as a mathematical derivation from basic laws. Let's assume that this conjecture is correct: Given initial and boundary conditions, any material object, regardless of its structural complexity can, in principle, be described with the fundamental physical laws. However, this is only a partial answer to the problem of complexity. There is also the question of self-organization: Can we justify the emergence of complex structures using the current laws? Can we justify the emergence and the evolution of complexity with physical laws or do we need additional principles? The answer is not obvious and the problem is still unresolved.

Getting back to economics, the field of mathematical economics is presently not able to explain the deep structural changes in economic systems. We can describe mathematically an economic system if its structure is well-defined but we cannot predict how economic structures will change. These changes are of a social and political nature and eventually include an element of innovation which is very hard to model. For example, it would have been difficult for economic models to predict the emergence of the derivatives markets. These questions go well beyond the understanding of the dynamics of economic systems once they are well-defined.

Are these the ultimate black swans? For example, can we say that the real black swan of the present crisis is the fact that so many facts have been overlooked for so long? Or is it the fact that we cannot predict what type of solutions will be adopted?

Presently, answering these questions is pure speculation. We have the tools to understand and describe economic systems when they are engineered. We have the tools to understand whether economic systems are more or less dangerous but we have neither the tools to foresee innovation nor the tools to understand what systemic choices we will make collectively even in the absence of innovation. But it would be futile to deny the power of descriptive mathematical tools that can influence major choices.

6. Why Mathematics is Reasonably Effective in Economics

Economics is ultimately the study of human economic decision-making processes integrated with the study of complex systems. In aggregate and in simple situations, human decision-making processes exhibit significant regularities, allowing for prediction. But many economic decision-making processes are not simple, as they are determined by the interplay of many different variables. The outcome therefore becomes hard to predict. The difficulty in studying economics as a mathematical process is not the unpredictability of human behavior per se but the complexity of the interactions. This fact is typical of complex systems: it does not preclude the study of economic systems but suggests the use of the conceptual tools that are being developed in the field of complexity.

In particular, we presently have to rely on learning, given that we have not been able to determine fundamental laws validated through experiments. The attempt to define a theoretical paradigm such as GET has, in practice, not produced results given that there is no way, either empirically or theoretically, to determine the Hamiltonian of the system. The crucial point is that in order for an axiomatic theory to be practically meaningful, all its axiomatic principles need to be simple and empirically meaningful. Newtonian dynamics would have been an empty theory if force fields could not be specified. We need a framework simpler than the GETs, more in line with empirical data.

Estimating models of complex systems requires a lot of data. Learning theory tells us that the ability to determine models of complex systems is constrained by the size of data samples. Ultimately, in economics and finance theory we do not have a lot of data. The real issue is that there are only a few years between any two major structural changes. At first sight we seem to have a lot of data because we have many individuals. For example, in financial markets we have thousands of time series of prices and returns. However, a large number of individuals creates a large number of possible mutual interactions. As a consequence, we can say that in economics and in finance theory, data are scarce given the complexity of the models. The real limitation in estimating dynamic models is the small size of samples along the time direction.

Hence we can only estimate simple models. For example, if we want to understand the evolution of stock market prices using factor models, we can only estimate a handful of factors, fewer than 10, even though markets are formed by thousands of stocks. Different economic systems and different financial markets are not all equally suitable for mathematical description, and the level of predictability differs across economic contexts.
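A minimal sketch of what "a handful of factors" means in practice: principal components of the correlation matrix of simulated returns driven by a hypothetical three-factor structure. Only a few components carry a substantial share of the variance.

    # Illustrative sketch: a few principal components explain most of the common
    # variation when returns are driven by a small number of factors.
    # The return-generating process is simulated and hypothetical.
    import numpy as np

    rng = np.random.default_rng(1)
    T, N, K = 250, 500, 3                       # 250 days, 500 stocks, 3 true factors
    factors = rng.standard_normal((T, K))
    loadings = rng.standard_normal((K, N))
    returns = factors @ loadings + 2.0 * rng.standard_normal((T, N))   # common part + noise

    corr = np.corrcoef(returns, rowvar=False)   # N x N sample correlation matrix
    eigvals = np.linalg.eigvalsh(corr)[::-1]    # eigenvalues, largest first
    explained = eigvals / eigvals.sum()
    print("share of variance explained by the first 5 components:", np.round(explained[:5], 3))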

We believe that mathematical economics is reasonably effective because all these elements can be put in a theoretical mathematical context. We can estimate uncertainty and we can understand whether we have sufficient data to make forecasts. The unpredictability of economies is to some extent designed because, either explicitly or implicitly, it is believed that unpredictability breeds opportunities. We abhor total predictability because we believe that it does not leave room for exploiting change and innovation. History offers many examples of political and economic systems that became predictable as they became highly static, incapable of change and innovation, and finally collapsed.

There is a strange fallacy in most discussions on the effectiveness of mathematics in economics. On one side, the inability to make reliable predictions is blamed as a major defect of mathematical economics. On the other side, we abhor predictability because we believe it takes away opportunities. But if we design an economic system so that it has many opportunities and many uncertainties, then mathematical modeling can do nothing but register and measure the unpredictability that was designed in. This is not to say that mathematical economics cannot be improved. On the contrary, there is much room for improvement. However, many of its achievements are substantial. Let's mention a few of them with a strong empirical bearing.

We can discriminate between economic phenomena that follow benign Gaussian distributions and those that follow fat-tailed distributions. The discrimination is not perfect because of the nature of the problem, but we can discriminate and we can assess whether we have enough data to make a reliable estimation. The estimation of fat-tailed distributions has substantially improved our identification of risky events.
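As an illustration of the kind of tool involved, here is a sketch of a Hill-type tail-index estimate applied to simulated data (in practice one would use observed returns or losses): a fat-tailed Student-t sample yields a low tail index, while a Gaussian sample does not.

    # Illustrative sketch of a Hill-type tail-index estimate used to distinguish
    # fat-tailed from Gaussian-like behavior. Data are simulated.
    import numpy as np

    rng = np.random.default_rng(2)

    def hill_estimate(sample, k=200):
        # Hill estimator of the tail index from the k largest absolute observations.
        x = np.sort(np.abs(sample))[::-1]
        return 1.0 / np.mean(np.log(x[:k] / x[k]))

    student_t3 = rng.standard_t(df=3, size=100_000)    # fat tails, true tail index ~ 3
    gaussian = rng.standard_normal(100_000)            # thin tails, no power-law tail

    print("Hill estimate, Student-t(3):", round(hill_estimate(student_t3), 2))
    print("Hill estimate, Gaussian    :", round(hill_estimate(gaussian), 2))

For the Gaussian sample the estimate is large and keeps growing as the threshold moves further into the tail, which is itself a diagnostic that the data are not power-law distributed.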

We have made important progress in understanding the cyclical behavior of many economic time series. We have learned that the performance of most models is subject to cyclical fluctuations: the autoregressive conditional heteroscedasticity (ARCH) effect identified by Robert Engle. This is a surprising and important phenomenon that seems to be almost universal. Another important discovery of cyclical behavior is cointegration, a conceptual tool to express the fact that two or more time series can individually fluctuate randomly but still remain closely linked, so that their relative distance is subject only to cyclical fluctuations. Cointegration was identified by Clive Granger. For these discoveries, Engle and Granger were jointly awarded the 2003 Nobel Memorial Prize in Economic Sciences. Both ARCH and cointegration deal with cyclical phenomena. The analysis of cyclical fluctuations has now been extended, with the same abstract mathematical structure, not only to the amplitude of variables but also to the time between observations.
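A short simulation makes the ARCH effect visible (the parameter values are hypothetical): the simulated returns are serially uncorrelated, but their squares are not, which is the signature of volatility clustering.

    # Illustrative simulation of an ARCH(1) process: returns are (nearly) uncorrelated,
    # but squared returns are autocorrelated, i.e., volatility clusters.
    # Parameter values are hypothetical.
    import numpy as np

    rng = np.random.default_rng(3)
    omega, alpha = 0.1, 0.5            # ARCH(1): sigma_t^2 = omega + alpha * r_{t-1}^2
    T = 5000
    r = np.zeros(T)
    for t in range(1, T):
        sigma2 = omega + alpha * r[t - 1] ** 2
        r[t] = np.sqrt(sigma2) * rng.standard_normal()

    def lag1_autocorr(x):
        x = x - x.mean()
        return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

    print("lag-1 autocorrelation of returns        :", round(lag1_autocorr(r), 3))
    print("lag-1 autocorrelation of squared returns:", round(lag1_autocorr(r ** 2), 3))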

Tools to understand aggregation phenomena, for example clustering and random graphs, have also been developed. This is an important step towards understanding self-organization and the criticalities associated with self-organization. Random graphs and percolation structures exhibit a critical behavior such that, when the probability of interaction between two adjacent nodes is close to a critical value, the entire network becomes connected. This behavior has been exploited to explain market phenomena such as panic selling in financial markets, when all market participants make the same investment decisions.

In connection with these tools, a well-articulated theory of learning has now been developed. The trade-offs between model complexity and sample size, as well as how to constrain model complexity as a function of the data, are now better understood. New tools, such as random matrix theory to gauge the amount of information genuinely present in a data sample, are starting to be employed.
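A sketch of the random-matrix idea: compare the eigenvalues of the sample correlation matrix of pure noise with the Marchenko-Pastur upper edge. Eigenvalues far above the edge would signal genuine common structure; with pure noise, the largest eigenvalue sits near the edge.

    # Illustrative sketch: eigenvalues of a noise-only correlation matrix versus the
    # Marchenko-Pastur upper edge (1 + sqrt(N/T))^2. Data are simulated noise.
    import numpy as np

    rng = np.random.default_rng(4)
    T, N = 1000, 200                              # observations x number of series
    noise = rng.standard_normal((T, N))           # i.i.d. noise, no genuine structure

    corr = np.corrcoef(noise, rowvar=False)
    eigvals = np.linalg.eigvalsh(corr)

    mp_upper_edge = (1 + np.sqrt(N / T)) ** 2
    print("largest eigenvalue         :", round(float(eigvals.max()), 3))
    print("Marchenko-Pastur upper edge:", round(mp_upper_edge, 3))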

Economists also have tools to collapse many variables into a small number of important predictors and explanatory variables. The use of hidden variables and factor models has made considerable progress, and economists can now estimate dynamic factor models even for large numbers of time series.

Economists then have reasonably good tools that allow them to evaluate the level of unpredictability of economic systems and to make forecasts whose quality is in agreement with the estimated level of uncertainty. These tools work when systems are sufficiently stable.

There are many areas where prevailing mathematical models could be improved. However, that improvement will critically depend on the type of economic system design that we adopt. We are now in a time of crisis and we can expect significant changes. Mathematical economics will be more or less effective as a function of the changes we will collectively make.

References

Black, Fischer and Myron Scholes, 1973. The pricing of options and corporate liabilities. Journal of Political Economy 81, 637-654.

Colander, David C., 2007. The Making of an Economist, Redux. Princeton: Princeton University Press.

Fischer, Stanley, 1987. Paul Anthony Samuelson. In John Eatwell, Murray Milgate, and Peter Newman (eds.), The New Palgrave: A Dictionary of Economics: Vol. 4. New York: Stockton Press.

Harrison, J. Michael and David M. Kreps, 1979. Martingales and arbitrage in multiperiod securities markets. Journal of Economic Theory 20, 381-408.

Keynes, John Maynard, 1936. The Collected Writings of John Maynard Keynes: The General Theory of Employment, Interest and Money, Vol. VII. United Kingdom: Macmillan and St. Martin's Press.

Kreps, David M., 1981. Arbitrage and equilibrium in economies with infinitely many commodities. Journal of Mathematical Economics 8, 15-35.

Leontief, Wassily, 1986. Input-Output Economics: Second Edition. New York: Oxford University Press.

Lucas, Robert, 1976. Econometric policy evaluation: A critique. Carnegie-Rochester Conference Series on Public Policy 1, 19-46.

Marx, Karl, 1867. Capital: A Critique of Political Economy. New York: International Publishers.

Muth, John F., 1961. Rational expectations and the theory of price movements. Econometrica 29, 315-335.

Ramrattan, Lall and Michael Szenberg, 2007. Gerard Debreu, in International Encyclopedia of Social Sciences, 2nd Edition. London and New York: Macmillan Reference.

Ramrattan, Lall and Michael Szenberg, 2007. Econometrics, in International Encyclopedia of Social Sciences, 2nd Edition. London and New York: Macmillan Reference.

Ramrattan, Lall and Michael Szenberg. 2007. Mathematics in Social Sciences, in International Encyclopedia of Social Sciences, 2nd Edition. London and New York: Macmillan Reference.

Samuelson, Paul A., 1947. Foundations of Economic Analysis. Cambridge: Harvard University Press.

Szenberg, Michael and Lall Ramrattan, and Aron Gottesman (Eds). 2007. Samuelsonian Economics and the Twenty-First Century. United Kingdom: Oxford University Press.

Szenberg, Michael and Lall Ramrattan. 2005. Gerard Debreu: The general equilibrium model (1921-2005) in memoriam. The American Economist 49 (Spring), 3-15.

Taleb, Nassim, 2007. The Black Swan: The Impact of the Highly Improbable. New York: Random House.

Wigner, Eugene, 1960. The unreasonable effectiveness of mathematics in the natural sciences. Communications on Pure and Applied Mathematics 13, 1-14.

Notes

(1.) Jan Tinbergen, a co-winner of the 1969 Nobel Memorial Prize in Economic Sciences, studied mathematics and physics and wrote his dissertation on minimization problems in physics and economics. Tinbergen developed the first econometric model in the 1930s and later developed the first macroeconomic model for the Dutch Government. However, in the 1930s, computers were not available. To our knowledge, Leontief was the first to run a computerized model using one of the first computers available at that time.

(2.) The Bourbaki Group was a group of leading mathematicians in Paris, including Jean Dieudonné, Laurent Schwartz, and André Weil, who adopted the collective pseudonym of Nicolas Bourbaki. The Bourbaki Group emphasized the rigorous axiomatization of mathematics.

(3.) As reported in OCC Derivatives Report.

by Sergio M. Focardi, Professor of Finance, EDHEC Business School.

Frank J. Fabozzi, Professor in the Practice of Finance and Becton Fellow, Yale School of Management. Corresponding author: Frank J. Fabozzi, Yale University, 135 Prospect Street, New Haven, CT 06511, (203) 432-2421. Email address: fabozzi321@aol.com or frank.fabozzi@yale.edu.