
Regularity in nonlinear dynamical systems.

Laws of nature have been traditionally thought to express regularities in the systems which they describe, and, via their expression of regularities, to allow us to explain and predict the behavior of these systems. Using the driven simple pendulum as a paradigm, we identify three senses that regularity might have in connection with nonlinear dynamical systems: periodicity, uniqueness, and perturbative stability. Such systems are always regular only in the second of these senses, and that sense is not robust enough to support predictions. We thus illustrate precisely how physical laws in the classical regime of dynamical systems fail to exhibit predictive power.

1 Introduction

2 Regularities in Nature

3 Nonlinear Regularity

3.1 Chaos

3.2 Coexistent Attractors

4 Failures of Regularity

1 INTRODUCTION

'I point your attention to the curious incident of the dog in the night.'

'But the dog did nothing in the night.'

'That is the curious incident.'

Now that the furor among physicists over nonlinear dynamics has somewhat subsided, a curious incident among philosophers must be reported. The incident in question is the decided lack of attention paid to the phenomena associated with nonlinearity and the behavior of nonlinear dynamical systems.(†) When one contrasts the philosophical response to the upsurge of interest in nonlinearity with the attention philosophers lavish on quantum theory (QM), two initial explanatory hypotheses suggest themselves as accounting for the disparity: (1) philosophers are convinced both by the strong thesis of microreductionism and by the completeness of QM thesis--thus the only philosophically interesting results in physics will be in the quantum realm, and results from, say, classical mechanics can be safely ignored as resting on false presuppositions; or (2) upon inspection, nonlinearity deserves only minor mention, being a mildly interesting kink in classical physics, but not representing something which should cause us to rethink anything fundamental--are there really any vestiges of LaPlacean determinism which remain as the century draws to a close?(1)

Which of these hypotheses is correct? We think the answer is 'Both' and 'Neither', in different respects, of course. Both, since it is true that most philosophical attention is focussed on QM, and it seems equally true that the consensus about nonlinearity is that it has limited import. Neither, in that what is philosophically important about nonlinear behavior has not been sufficiently articulated. It seems clear that the dynamics of nonlinear systems teaches us something about the connection between non-probabilistic laws of nature (and their mathematical expression as evolution equations descriptive of physical systems) and predictability. It also seems clear that what it teaches is that there is no logically necessary connection between what has traditionally been called determinism and predictability. And if this were all, then perhaps enough has been said.

Our present aim is to show that this is not all. One way of suggesting that there exists a lacuna is by noting the use of the term 'chaotic system' in the only two recent works which address the topic of nonlinearity in any detail (Hunt [1987] and Stone [1989]). Both authors claim that 'chaotic systems' illustrate a logical gap between determinism and predictability, via such systems' exhibition of sensitive dependence on initial conditions. This is fine as far as it goes, but is misleading in at least two ways. First, it is not proper to speak of a 'chaotic system'. Systems should be defined by the properties of the laws which govern them, and in systems which do indeed exhibit chaotic behavior, that behavior arises only for some parameter values and not others. The class of systems which will exhibit chaotic behavior (among other behaviors) is nonlinear systems. The dependent quantities of such systems interact in a nonlinear fashion; equivalently, such systems are properly modelled by equations which include one or more nonlinear terms.(2) Every example of chaotic behavior arises in a nonlinear system, but not every nonlinear system will exhibit chaos.

The first error leads to a second, more important one: that of conflating at least two distinct ways in which nonlinear systems can be unpredictable. The first (and most widely publicized) way obtains when the system possesses a single chaotic final state. The second way relies on the coexistence of two or more stable final states available to the system for the same set of parameters, where none of the end states need be chaotic. Inherent in the distinction of these two sources(3) of unpredictability is a recognition that nonlinearity deserves a more careful treatment than it has received thus far in the philosophical literature, and it will be our claim that nonlinearity has implications for the concept of a law of nature which have been ignored.

Our central aim is to show that what is most important about nonlinearity is that it teaches us something about the regularities that laws have been assumed to express. By isolating not only which regularities break down in nonlinear dynamical systems, but also exactly how such regularities break down, we come to a clear understanding of two key points: (1) natural laws do entail predictability when coupled with some senses of regularity; (2) the only sense of regularity which is always true of that subset of natural laws describing nonlinear dynamical systems fails to support predictions. The explicit caveat to note here is that our discussion only ranges over the laws governing dynamic time-evolving systems in a quasi-classical limit (quite a large class, including the relativistic and non-relativistic motion of particles, rigid bodies and fluids), and our conclusions must be restricted accordingly. We leave the question of quantum chaos to the more intrepid.

2 REGULARITIES IN NATURE

Two key points must be established before we discuss regularity in nonlinear systems. The first is that it is a widespread and commonplace assumption that laws at least express regularities. The second is that some conception of regularity is in fact indispensable to the concept of law, and is not just part of the untutored penumbra of pre-philosophical discussion about laws to be excluded in the final analysis. We begin with the first.

What is the connection between physical laws and regularities? Consider the following expressions culled from the literature. 'Laws of nature characteristically manifest themselves or issue in regularities' (Armstrong [1983], p. 11). 'A nomothetic explanation we can define as one that appeals to an empirical regularity, or combination of such regularities' (McMullin [1984], p. 209). 'If a certain regularity is observed at all times and all places, without exception, then the regularity is expressed in the form of a "universal law"' (Carnap [1966], p. 3). Such a catalogue of expressions would not be complete without mentioning Hempel's characterization of deductive nomological (DN) explanation as fitting 'the phenomenon to be explained into a pattern of uniformities' ([1966], p. 50).

In addition to these more or less explicit formulations of the relation between laws and regularities, the use of phrases like 'regularities in nature' seems to take for granted that we all are familiar with and understand what is meant by 'regularity'. For example, 'Regularities in the observed events need to be formulated in a generalisation' (Emmet [1984], p. 29); or, 'given that there are regularities, the presence of something that would account for them appears to be called for' (Levine [1986/1987], p. 83); or, 'It is just conceivable that the regularities in the world ... are what Montague calls "an outrageous run of luck"' (Blanshard [1976], p. 253).

Attentive readers may be slightly puzzled at this point, since they will have recognized that we have quoted authors who argue opposing sides of at least two distinct questions: (1) can causality be analysed in terms of regularities? and (2) can laws of nature be analysed in terms of regularities? Although the debates concerning these two questions may be interesting in their own right, we wish to draw attention to an assumption which all parties seem to share, namely that laws indeed express regularities. To those engaged in the debate, the issues have been whether laws or causality can be adequately understood in terms of the regularities which they (the laws) express, or whether recourse to some concept of necessity or an analysis of properties is required, but not whether laws express regularities.

There are as well the two related views that laws explain regularities, and that regularities provide evidence for the existence of laws. By way of illustration, consider briefly a recent cryptic exchange between Armstrong and van Fraassen. Commenting on Armstrong's What is a Law of Nature?, van Fraassen suggests ([1987], pp. 243--60) that one line of argument for the existence of laws of nature draws on a premise asserting the existence of pervasive regularities in nature. By way of reply, Armstrong agrees, and says that this premise is 'very strong' ([1988], pp. 224--9). Armstrong goes on to say that the regularities would be well explained if there are laws of nature. He thus draws a distinction between observable regularities (expressed by empirical generalizations) and relations between properties (expressed by laws), and it is then such relations between universal properties which explain the regularities we observe among objects which exhibit the relevant properties. Consequently (see, for example, Dretske [1977], pp. 248--68), it makes sense to say that observable regularities provide evidence for laws, since laws are not merely summaries of what we observe.

But an observable regularity could only be explained by (and provide evidence for) a law if the law itself expressed some (unobservable) regularity between properties. The law may indeed express more than mere regularity, perhaps nomic necessity, but on this view it must at least express explanatory regularity. So the view that laws are explanations of observable regularities entails that laws themselves express some further sort of regularity.

Why should the regularity assumption figure so prominently in current literature? McMullin ([1984], pp. 205--20) has suggested that the alleged importance of nomothetic explanation results from the failure to put to rest the ghost of Hume. Yet, embodied in this suggestion is the assumption that laws of nature express regularities. McMullin is only concerned to point out that the regularities themselves may need explanation, that they by themselves do not explain adequately (contra neo-Humeanism). He is not suggesting that laws do not express regularities; in fact, he presupposes they do. Hume is indeed a large reason why the regularity theory of causation and also (it can be argued) the regularity theory of laws are prevalent, and McMullin's remarks are pertinent here. But the notion that laws express regularities is independent of such theories.

So it seems that a great many thinkers do assume that laws express regularities. But to what does 'regularity' refer in the preceding catalogue? One way of answering this question is to return to the specific context of each usage and ferret out some reference. But every usage catalogued turns out on examination to be at least ambiguous in its application to physical law, since it will turn out there are at least three regularities which laws can express. At best, then, this method will only show us which ambiguities are allowed in the context, and yield the conclusion that philosophers have used the term in seemingly important but ambiguous ways.

This might lead us to reject the concept of regularity. Suppose that a minimal conception of law is that of a true universal expression of regularity (neglecting the well-known problem of accidental generalizations). Why not advocate that philosophers drop the usage of 'regularity' as a hopelessly imprecise term, and simply define laws as true universal propositions (or even as true universals, as Armstrong suggests)? This proposal would have the advantage of avoiding any vagueness or ambiguity associated with regularity. But laws must be something more and other than true universal propositions, and this for at least two reasons.

The first is that 'Things happen' is universally true, as well as one of the pair 'God exists', 'God does not exist', and these, for obvious reasons, will not do as laws of nature. Or rather, if these count as laws, then the concept of law will be useless to science. So laws cannot be only true universal propositions.

The second is that laws do not seem to be universal in the robust sense of that term, i.e. that they apply always and everywhere to everything. In fact, inferring from the best attempts so far to state laws, it appears that most of these fail variously to apply always, everywhere, and to everything. Accounts of the origins of the universe suggest that laws operative here and now are matters of contingent historical circumstance, and that they (these laws) did not hold of the very early history of things; they thus fail to apply always. Laws fail to apply to those things or systems which do not possess the properties referred to in a law; thus laws which have a mass component will fail to apply to massless entities like Moby Dick, and will fail to apply to everything. Finally, but more controversially, not all laws will be invariant under the transformation equations required to render them expressible in a different reference frame; thus laws may not hold everywhere.

The upshot is that abandoning the concept of regularity leaves us with a concept of laws which are just true local propositions about some things and systems, which further express merely the latest stage in the history of these things and systems. This impoverished concept needs supplementation, and it needs supplementation precisely in point of the regularity laws should express. But this lands us squarely at the point of trying to discover what should be the referent of 'regularity'.

Consider, as a paradigm case, the motion of a damped, driven simple pendulum. First, the simple pendulum can exhibit recurrent motion. This recurrent motion is periodic in the following sense: the state or configuration of the system repeats itself after a finite period of time, and continues to do so in the absence of any external disturbance. Mathematically, if the time-dependent state $\phi(t)$ of a system recurs in such a way that $\phi(t) = \phi(t + T)$, then the motion is periodic with period $T$ as time $t$ increases.

Second, the motion of the driven pendulum exhibits a certain unity of succession between earlier and later states; the state of the system at any given time is a definite function of its state at a previous time. This is taken by many to be the defining characteristic of deterministic systems, even of determinism itself. This determinism is the consequence of the condition of uniqueness of solutions to the equation of motion. That is, a single specification of the values of all variables needed to describe the state of the system at time $t = 0$ ($t_0$) leads, under the time-evolution of the equation of motion, to a unique final state. If we pull the pendulum bob up a certain distance and release it, it will (for small amplitude forcing) swing back and forth about its rest position periodically for all subsequent times.

A third and critically important property to be associated with regularity, and one which can be illustrated by small-amplitude motion of our pendulum, is stability. Consider again the oscillation of the pendulum for small-amplitude forcing, and the periodic solution obtained for the initial conditions chosen. Given that this solution is unique, is it also pathologic, i.e. only obtained by fortuitous choice of initial conditions? If we start the motion at a point slightly higher or lower than before, do we obtain a different motion? The answer for our current case is 'No'. After a brief transient motion which is different, the motion settles into the same periodic pattern as before (which is why it doesn't take a certified quantum mechanic to wind up the grandfather clock). We have thus loosely identified a type of stability.(4)

We should refine our concept of stability a bit before proceeding. We consider the evolution of a set of closely neighboring initial states. The extent of the set is quantifiable by specifying a central state and a distance (in phase space)(5) $\epsilon$ within which all the neighboring states lie. Then the size (in terms of the distance $h(\epsilon)$ from the time-evolved central state) of the set of corresponding final states obtained is of interest. If the system behaves as our pendulum has so far, then the final distance $h(\epsilon)$ is small and directly proportional to the initial distance $\epsilon$: small initial changes produce proportionately small changes in final results. This property of stability with respect to small initial perturbations, or perturbative stability, is true of all linear systems and has traditionally been assumed to hold in the general case.(6)
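To make perturbative stability concrete, the following minimal sketch (ours, not the article's; the small-angle model, the use of scipy's solve_ivp, and all parameter values are illustrative assumptions) evolves two initial states of a linearized damped, driven pendulum that differ by $\epsilon$ and compares their final separation $h(\epsilon)$ with $\epsilon$.

```python
# Hedged sketch: perturbative stability of a LINEAR (small-angle) damped,
# driven pendulum. Parameter values are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

def linear_pendulum(t, y, c=0.5, g_l=1.0, f=0.5, w=2.0/3.0):
    theta, theta_dot = y
    # Small-angle approximation: sin(theta) replaced by theta.
    return [theta_dot, -c*theta_dot - g_l*theta + f*np.cos(w*t)]

eps = 1e-4
y0 = np.array([0.2, 0.0])

sol_a = solve_ivp(linear_pendulum, (0.0, 200.0), y0,       rtol=1e-9, atol=1e-12)
sol_b = solve_ivp(linear_pendulum, (0.0, 200.0), y0 + eps, rtol=1e-9, atol=1e-12)

h = np.linalg.norm(sol_a.y[:, -1] - sol_b.y[:, -1])
print(f"initial separation: {eps:.1e}, final separation h(eps): {h:.1e}")
# Both trajectories settle onto the same periodic motion: the separation
# does not grow, and for this damped linear system it in fact decays.
```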

Three possibilities thus emerge for what might be meant by 'regularity' in connection with physical law:

1. Periodicity--the state of the system is recurrent.

2. Uniqueness--given an initial state, one and only one final state is obtained under evolution of the system.

3. Perturbative stability (linear)--given an initial state with perturbation $\epsilon$, the time-evolved final state will lie within a volume in phase space of radius $h(\epsilon)$, where $h$ is small and at worst a linearly increasing function of time.

It can readily be seen that predictability, in the sense of being able to determine the future state of a system from the initial conditions and the law governing the system, will be a function of the regularity embodied in the system (and thus the regularity expressed by the law governing the system). If the system exhibits periodicity, then knowledge of the state at any time, plus the period of the motion, will allow us to predict the recurrence of that state at later times. Even if the motion is non-periodic, uniqueness guarantees that we can predict the state of the system at any later time, provided we possess a precise specification of a single initial state. Finally, if small initial changes only produce correspondingly (via a linear function) small final changes, then even this regularity will allow for predictions.

It is useful to recall the thesis of the symmetry of explanation and prediction which was supposed to be a hallmark of scientific explanation on the Deductive-Nomological model. To explain the occurrence of some phenomenon is to subsume it under the laws governing such phenomena, i.e. to exhibit that phenomenon as an instance of some regularity, since on this view laws express regularities. To show the phenomenon in question to be an instance of some regularity is to show that it was to be expected, and further to show that, given the same conditions, it would be expected in the future.

If none of the three senses of regularity obtains, then explanation would afford no basis for prediction. Ignoring the problems with universality mentioned above, suppose we were in possession of a true universal proposition covering dynamical systems which did not express any regularity of these systems. Consider the true universal proposition, 'No laws governing dynamical systems refer to labor unions'. We can 'predict' that at no point in the time evolution of these systems will the quantities involved enter into a collective bargaining agreement. But such a 'prediction' is simply the logical consequence of definition, not a prognosis about how the system will work itself out over time due to the causal interaction among its dependent quantities.

It follows that the relationship between natural law and predictability is mediated by the regularity which a system possesses: a dynamical system via its embodiment of some (one or more) regularity will be susceptible of predictions. Put another way, a dynamical law via its expression of some regularity will entail predictability.

Or will it? As is well known, predictability does break down in nonlinear deterministic systems, and it breaks down in our paradigm case, the pendulum, as we shall demonstrate. Given this result, the question to be asked is 'How does predictability break down in such systems?' Via, or so we will argue, a failure of one or more regularities to obtain.

3 NONLINEAR REGULARITY

Our aim in this section is to exhibit the properties of nonlinear systems which are directly relevant to the regularities which they possess. We continue with the damped, driven simple pendulum--a nonlinear oscillator--as a paradigm, and restrict ourselves to the mathematical model for ease of discussion. Consider a rigid pendulum of length l and mass m, restricted to planar motion. The time-dependent variables are the angle of deflection and the angular velocity, $\theta(t)$ and $\dot{\theta}(t)$ respectively; thus, we have a two-dimensional phase space.(7) By applying Newton's second law, we can derive the equation of motion for the pendulum bob by considering all the forces acting on it. The result is:

$$ml\ddot{\theta} + c\dot{\theta} + mg\sin(\theta) = f\cos(\omega t). \qquad (1)$$

The first term in (1) is the mass m of the bob times the tangential acceleration $l\ddot{\theta}$, representing the inertia of the pendulum. The second term is the angular velocity $\dot{\theta}$ multiplied by the coefficient of friction c, representing the frictional force (damping) at the pivot. The term on the right-hand side is the external time-varying force $\tau$ of strength f and angular frequency $\omega$ applied at the pivot to drive it into oscillation. The third term is the nonlinear restoring force on the bob due to gravity, where g is the acceleration due to gravity.
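Equation (1) is easy to explore numerically once divided through by ml and rewritten as a first-order system. The sketch below is only illustrative: the use of scipy's solve_ivp and the particular values chosen for c/ml, g/l, f/ml and ω are our own assumptions, not those behind the article's figures.

```python
# Hedged sketch: equation (1) divided through by ml and written as a
# first-order system. Parameter values are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

def pendulum(t, y, c_ml=0.5, g_l=1.0, f_ml=1.2, w=2.0/3.0):
    """y = (theta, theta_dot) for the damped, driven pendulum."""
    theta, theta_dot = y
    theta_ddot = -c_ml*theta_dot - g_l*np.sin(theta) + f_ml*np.cos(w*t)
    return [theta_dot, theta_ddot]

# Start from rest at the stable equilibrium and integrate forward.
sol = solve_ivp(pendulum, (0.0, 300.0), [0.0, 0.0],
                max_step=0.05, rtol=1e-8, atol=1e-10)
theta, theta_dot = sol.y   # trajectory in the two-dimensional phase space
```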

In the absence of an external force, the system exhibits a stable equilibrium state at $(\theta(t), \dot{\theta}(t)) = (0, 0)$; that is, hanging vertically. At this point, the restoring force, proportional to $\sin(\theta)$, is zero. The restoring force reaches its maximum value (oppositely directed) at $\pi/2$ and $3\pi/2$, and again goes to zero at the unstable equilibrium point at the top of its arc, $\theta = \pi$. The interplay between the nonlinear restoring force and the periodic driving force will, for large enough amplitudes, give rise to a sort of asynchronicity between the oscillations of the pendulum and the driving force; the regular push and pull of the driving force will get out of phase with the oscillations, a situation which would not arise if the restoring force were linear in $\theta$.

3.1 Chaos

If we apply a driving force at a moderate amplitude, the resulting motion is periodic, and its period is that of the driving force. Figure 1 illustrates this process; the attractor (see footnote 5) is a closed, invariant loop for $t \rightarrow \infty$. Also in Figure 1, we introduce the Poincaré section.(8) In this case, since the motion repeats itself every period ($\gamma(t) = \gamma(t + T_d)$, $n = 1$), there is only one point in the section, and the attractor (in the Poincaré plane) is a period-one fixed point. If we increase the forcing amplitude, the familiar transition to chaos will be observed.(9) In the chaotic state, the pendulum alternately rotates clockwise, then counterclockwise, with no periodicity. The resulting infinite set of points appearing in the Poincaré plane (Figure 2) is called a strange attractor.
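A Poincaré section of the kind just introduced (and described further in footnote 8) can be sketched by sampling the numerically integrated trajectory once per driving period $T_d = 2\pi/\omega$. The parameter values below are again our own illustrative assumptions; varying the forcing amplitude f/ml moves the sampled points from a single period-one fixed point to the scattered set characteristic of a strange attractor.

```python
# Hedged sketch of a Poincare section: sample the trajectory once per
# driving period T_d. Parameter values are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

W = 2.0/3.0                 # driving angular frequency (illustrative)
T_d = 2.0*np.pi/W           # driving period

def pendulum(t, y, c_ml=0.5, g_l=1.0, f_ml=1.2, w=W):
    th, om = y
    return [om, -c_ml*om - g_l*np.sin(th) + f_ml*np.cos(w*t)]

n_periods, n_transient = 1000, 100          # discard early transients
t_section = np.arange(n_periods)*T_d

sol = solve_ivp(pendulum, (0.0, t_section[-1]), [0.0, 0.0],
                t_eval=t_section, rtol=1e-9, atol=1e-11, max_step=0.05)

theta_p = np.mod(sol.y[0, n_transient:] + np.pi, 2*np.pi) - np.pi   # wrap angle
thetadot_p = sol.y[1, n_transient:]
# Plotting (theta_p, thetadot_p) as points gives the Poincare-plane picture:
# one point for a period-one orbit, a strange attractor for a chaotic orbit.
```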

[Figures 1 and 2 omitted]

Returning to our discussion of Section 2, we have thus a loss of predictability via a failure of regularity as periodicity for a system in a chaotic state. Regularity as uniqueness can be proven to obtain for a generic class of such systems of differential equations. Not as obvious is the status of regularity as stability. Rather than perform the involved formal stability analysis necessary to prove a loss of local perturbative stability, we simulate a perturbation of the initial state, and follow the evolution.

We perturb an initial state (0, 0) by an amount $\epsilon$, giving us a perturbed initial state $(0 - \epsilon, 0 - \epsilon)$, where $\epsilon = 0.0001$. We then plot the two initial states and their subsequent motion on the chaotic attractor of Figure 2. Figure 3 shows the results, the initially indistinguishable points being separated to opposite ends of the attractor after only six periods of the driving. While writing a formal analytic $h(\epsilon)$ for this case is impossible, it is possible to quantify this rapid separation of arbitrarily close initial conditions by performing the above calculations for many initial conditions ranging over the entire attractor. The result is that $h(\epsilon) \approx \epsilon e^{\lambda t}$ gives the separation at some later time t, where $e$ is the base of the natural logarithm and $\lambda$ is a constant characterizing the 'strength' of the separation mechanism. This sensitive dependence on initial conditions means that regularity as stability also fails to obtain.
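The separation just described can be simulated directly. In the hedged sketch below, two initial states a distance $\epsilon = 0.0001$ apart are integrated side by side and the growth of their phase-space separation is fitted; the fitted slope plays the role of the constant $\lambda$. The parameter values are our own illustrative choices (a regime widely used in the literature as a chaotic example), not necessarily those of Figures 2 and 3.

```python
# Hedged sketch: sensitive dependence on initial conditions. Two states
# eps apart are integrated side by side; their separation grows roughly
# as eps*exp(lambda*t) until it saturates at the size of the attractor.
import numpy as np
from scipy.integrate import solve_ivp

def pendulum(t, y, c_ml=0.5, g_l=1.0, f_ml=1.2, w=2.0/3.0):
    th, om = y
    return [om, -c_ml*om - g_l*np.sin(th) + f_ml*np.cos(w*t)]

eps = 1e-4
y0 = np.array([0.0, 0.0])
t_eval = np.linspace(0.0, 60.0, 601)

sol_a = solve_ivp(pendulum, (0.0, 60.0), y0,       t_eval=t_eval, rtol=1e-11, atol=1e-13)
sol_b = solve_ivp(pendulum, (0.0, 60.0), y0 - eps, t_eval=t_eval, rtol=1e-11, atol=1e-13)

sep = np.linalg.norm(sol_a.y - sol_b.y, axis=0)
# Crude estimate of lambda from the early, pre-saturation growth of log(sep).
lam = np.polyfit(t_eval[10:300], np.log(sep[10:300]), 1)[0]
print(f"estimated separation rate lambda ~ {lam:.2f} per unit time")
```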

[Figure 3 omitted]

3.2 Coexistent Attractors

The set of initial states whose final state under the evolution of the system is a given attractor is called the basin of attraction for the attractor. In general, a nonlinear dynamical system may possess several coexistent attractors; thus it will have many basins of attraction, i.e. regions of phase space which are mapped into one or the other attractor. The boundaries between these basins are of interest when we want to predict the final state of some initial condition, and the precise nature of these boundaries will affect the local stability of the system.

For parameter values of the pendulum $c/ml = 0.2$, $f/ml = 2$, $g/l = 1$, $\omega = 1$, there are two possible attractors; that is, there are two possible stable steady-state final motions. They are both period-1 rotations of the pendulum, one clockwise on average, the other counterclockwise. These attractors are plotted in Figure 4 in both continuous time (note the symmetry) and once every period (points A and B). Which final state the system achieves is governed by the choice of initial conditions: at time $t_0$, we are free to start the motion with any value of position and velocity.

[Figure 4 omitted]

The naive assumption is that if we start the pendulum rotating clockwise ($\dot{\theta} < 0$), it will end up clockwise (attractor B), and vice versa. If we denote, by plotting a black dot, those initial conditions which, as $t \rightarrow \infty$, rotate counterclockwise (thus the dots comprise the basin of attraction for A), and by plotting nothing those which end up rotating clockwise, then our basins of attraction are not the half-planes we might have assumed. Figure 5 illustrates the actual situation. The two basins are intertwined in a non-trivial fashion, making it almost impossible to predict whether any given initial condition will result in clockwise or counterclockwise motion. Instead of being a simple line, the boundary between the two basins is itself an object with a fractal dimension $d \approx 1.7$. From any point whose final state is A (except for those points in the regions immediately surrounding each attractor), a small step in any direction will yield a point whose final state is B, and thus will yield a boundary point in between.
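A basin map of the kind shown in Figure 5 can be sketched by scanning a grid of initial conditions and recording which rotation each settles into. The sketch below uses the parameter values quoted above (c/ml = 0.2, f/ml = 2, g/l = 1, ω = 1); the classification rule (the sign of the long-run mean angular velocity) and the grid resolution are our own assumptions for illustration, not the article's procedure.

```python
# Hedged sketch: basin-of-attraction scan for the two coexisting
# period-1 rotations. Classification by the sign of the long-run mean
# angular velocity is an illustrative heuristic, not the article's method.
import numpy as np
from scipy.integrate import solve_ivp

def pendulum(t, y, c_ml=0.2, g_l=1.0, f_ml=2.0, w=1.0):
    th, om = y
    return [om, -c_ml*om - g_l*np.sin(th) + f_ml*np.cos(w*t)]

def final_rotation(theta0, thetadot0, t_end=200.0):
    """Return +1 for counterclockwise (attractor A), -1 for clockwise (B)."""
    sol = solve_ivp(pendulum, (0.0, t_end), [theta0, thetadot0],
                    rtol=1e-7, atol=1e-9, max_step=0.05)
    tail = sol.y[1, 3*sol.y.shape[1]//4:]       # last quarter of the run
    return 1 if tail.mean() > 0.0 else -1

thetas = np.linspace(-np.pi, np.pi, 61)
omegas = np.linspace(-2.0, 2.0, 61)
basin = np.array([[final_rotation(th, om) for th in thetas] for om in omegas])
# Imaging 'basin' (e.g. with matplotlib's imshow) shows the intertwined,
# fractal-boundaried structure described in the text.
```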

We can again quantify this sensitive dependence on initial conditions. $h(\epsilon)$ is now a discrete nonlinear function (again not susceptible of analytic definition for the entire phase space) which takes on either one of two values, 0 or the distance $d(A, B)$ between the two attractors, and which value it has is dependent on both $\epsilon$ and the initial state itself. Of more use is the following argument due to McDonald et al. ([1983], p. 125). Consider an initial condition as a point at the center of a disc of radius $\epsilon$. If any point within this disc is attracted to a different final state than the 'true' initial state (i.e. if the disc overlaps a basin boundary) then the point representing the 'true' initial state is considered uncertain. We now consider the fraction f of uncertain points over a large region of the phase space. The uncertainty fraction f is proportional to $\epsilon^{\alpha}$, where $\alpha \leq 1$.

For our naive assumption, $\alpha = 1$, and the uncertainty fraction scales linearly with the initial perturbation, yielding 0 in the limit, with linear convergence. However, for the actual nonlinear system in Figure 5, $\alpha \approx 0.3$; thus a tenfold decrease in perturbation magnitude yields only a twofold ($10^{0.3} \approx 2$) decrease in uncertainty. Further, because of this scaling behavior (a consequence of the infinitely fine-scale structure), for very small $\epsilon$ there will remain a finite and non-localized fraction of uncertain initial conditions in the phase space, thus illustrating the failure of local perturbative stability.
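The uncertainty-fraction argument can be simulated in the same spirit: perturb each of a sample of random initial conditions by $\epsilon$, count how many are thereby sent to the other attractor, and fit the scaling of that fraction against $\epsilon$. The sketch below repeats the (assumed) classification heuristic from the previous sketch so that it is self-contained; the resulting estimate of $\alpha$ is only illustrative.

```python
# Hedged sketch: estimating the uncertainty exponent alpha in f ~ eps**alpha,
# using the same illustrative pendulum parameters and classification
# heuristic as the previous sketch.
import numpy as np
from scipy.integrate import solve_ivp

def pendulum(t, y, c_ml=0.2, g_l=1.0, f_ml=2.0, w=1.0):
    th, om = y
    return [om, -c_ml*om - g_l*np.sin(th) + f_ml*np.cos(w*t)]

def final_rotation(theta0, thetadot0, t_end=200.0):
    sol = solve_ivp(pendulum, (0.0, t_end), [theta0, thetadot0],
                    rtol=1e-7, atol=1e-9, max_step=0.05)
    return 1 if sol.y[1, 3*sol.y.shape[1]//4:].mean() > 0.0 else -1

rng = np.random.default_rng(0)

def uncertainty_fraction(eps, n_samples=200):
    uncertain = 0
    for _ in range(n_samples):
        th, om = rng.uniform(-np.pi, np.pi), rng.uniform(-2.0, 2.0)
        # A point is 'uncertain' at scale eps if a perturbation of size eps
        # (here along theta only, for simplicity) changes its final state.
        if final_rotation(th + eps, om) != final_rotation(th, om):
            uncertain += 1
    return uncertain / n_samples

eps_values = np.array([1e-1, 3e-2, 1e-2, 3e-3])
f_values = np.array([uncertainty_fraction(e) for e in eps_values])
f_values = np.maximum(f_values, 1.0/200)        # guard against log(0)
alpha = np.polyfit(np.log(eps_values), np.log(f_values), 1)[0]
print(f"estimated uncertainty exponent alpha ~ {alpha:.2f}")
```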

[Figure 5 omitted]

4 FAILURES OF REGULARITY

As the results in Section 3 indicate, the claim that a law of nature via its expression of some regularity will entail predictability must be modified, for we are in possession of a clear counterexample, a system belonging to a class of systems whose laws admit, in the general case, of only one sense of regularity. Let us briefly recapitulate.

If a system is periodic-regular, it is predictable. But not all candidate systems are periodic, and those which exhibit periodicity do not do so for every choice of parameter values, as is clear from the result depicted in Figure 2. The chaotic result is especially interesting, since it illustrates the generation of aperiodic, complex behavior which is not the result of error, noise, or complex external input. Thus nonlinear dynamical systems are not regular in the sense of exhibiting periodicity.

If a system is perturbative-stability-regular, it is predictable. But not all candidate systems, as can be seen from the results illustrated in Figures 3 and 5, exhibit perturbative stability. This result, it should be noted, need not depend on chaotic behavior; rather, it is a consequence of the complex structure of the phase space. Thus, nonlinear dynamical systems are not regular in the sense of exhibiting perturbative stability.

The only sense in which nonlinear dynamical systems can be categorically said to be regular is the sense in which there exists a unique path from one point in phase space at $t_0$ to another point at time $t > t_0$. For all possible parameter values in our paradigm case, the condition of uniqueness is satisfied. Unfortunately, this regularity simply cannot serve as an enabler of predictions in the absence of one or the other of the alternative types of regularity. Regularity as uniqueness is not robust enough to serve the function required by predictability.

In this regard, consider Hunt's recent brief discussion ([1987], pp. 129--32). Hunt draws a distinction between physical and epistemic determinism. Physical determinism simply says that future states of physical systems are completely determined by their past states. Epistemic determinism says that we can predict future states given a knowledge of past states and the laws governing the system. Hunt correctly points out that a smooth transition from initial to final states is not present in a nonlinear system, and, as a consequence, epistemic determinism fails.

What Hunt fails to point out, however, is that predictability fails for two distinct reasons in nonlinear systems: (1) arbitrarily close points on the same chaotic attractor will diverge over time in an exponential fashion, and this divergence occurs within a single stable state of the system; (2) nonlinear systems may have more than one stable final state, and arbitrarily close initial conditions may diverge to different stable states, and these states may be chaotic or periodic. Both results stem from the complex geometry in the phase space of a nonlinear system, where both single attractors and basin boundaries can be fractals. Hunt conflates these two situations. As we have noted, the consequences for what he calls epistemic determinism are the same, but the consequences for a conception of 'deterministic' law are quite different. What are those consequences?

The assumption that laws express regularities is, of course, true, and only its clarity was ever seriously questioned here. But crucially, as we developed earlier, there are three senses in which it might be true of laws describing dynamical systems. Thus, the further claim that the laws via their expression of regularity enable predictions must be restricted to two disjunctive situations. Laws will enable predictions when: (1) regularity as periodicity obtains; (2) regularity as perturbative stability obtains. The lesson of nonlinearity is that (1) and (2) fail to be true of all dynamical systems, and thus natural laws describing such systems (and hence what has traditionally been referred to as determinism) do not entail predictability.

(†) Note added in proof: Since the submission of this manuscript, several noteworthy articles have appeared. The reader is directed to Batterman [1993], Tavakol [1991], and Winnie [1992].

(1)It is not our intent to treat determinism per se, and our occasional use of the term should be read as conventional.

(2)What is nonlinearity? For that matter, what is linearity? The intuitive idea of linearity is captured by the notion of simple proportionality: if a is proportional to b, then we say that a is a linear function f of the variable b; i.e. a = f(b). If we double the input (b), then we double the output (a). The most easily grasped distinction between linear and nonlinear functions is geometrical. Where a = f(b), if a plot of a with respect to b is a straight line, as the independent variable b takes on all possible values in its allowed domain, f is linear; if the plot is any other curve, f is said to be nonlinear. More generally, a nonlinear function (usually appearing as one of many other terms in an equation) is a function which contains a variable raised to a power other than one or zero, or a product of two or more variables, or a variable as the argument of a transcendental function (e.g. sine or cosine). An equation containing one or more of such nonlinear terms is then said to be a nonlinear equation, and the system which inspired the equation is called nonlinear as well.
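The proportionality test described in this footnote can be stated in a couple of lines of code; the particular functions below are arbitrary examples of ours, not drawn from the article.

```python
# Hedged example of the proportionality test for linearity described above.
import numpy as np

linear    = lambda b: 3.0*b        # a = 3b: the plot of a against b is a straight line
nonlinear = lambda b: np.sin(b)    # a transcendental function of the variable

b = 0.7
print(np.isclose(linear(2*b), 2*linear(b)))          # True: doubling input doubles output
print(np.isclose(nonlinear(2*b), 2*nonlinear(b)))    # False: sin(2b) != 2*sin(b)
```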

(3)We do not wish to imply, however, that the two phenomena are unrelated. They are both a consequence of what we might call the complex underlying geometrical structure of the phase space of a nonlinear dynamical system. This structure can be analytically quantified (see, for example, Smale [1963] and Schuster [1989]), and many physically important classes of systems can be shown to possess the same dynamics.

(4)There are many types of stability, and our discussion is strictly limited to that type of local stability of solutions to small perturbations which we identify in the text, which is closely related to Lyapunov stability.

(5)For a concise and illustrative development of the concepts of phase space, trajectories, attractors, and other ideas and terms used in describing dynamical systems, we refer the reader to either Lauterborn and Parlitz [1988], Grebogi et al. [1987], or Schuster [1989].

(6)For the example noted, in fact, the motion obtained is exactly the same, merely phase-shifted with respect to absolute time due to the different duration of initial transient motion. An important practical consideration of the concept of perturbative stability is that of measurement and computational imprecision. Investigation of the evolution of small perturbations is functionally equivalent to a finite error in the specification of the initial conditions, either due to finite measurement capability or finite bits of precision in the representation of an initial condition in a computer simulation. Thus the question of stability could become the question of reproducibility of experimental results.

(7)More precisely, this is a projection in 2 dimensions of the proper 3-D phase space for the corresponding autonomous system. (See Schuster [1989].)

(8)Instead of viewing the trajectory at all times, we choose to view it only at one point in time for each period of the driving force, as if we were turning on a strobe light to view the attractor once every period. If $\gamma(t)$ is the trajectory in continuous time (determined in this case by the pairs $\theta(t)$, $\dot{\theta}(t)$), then we look at the discrete points $\gamma_p = \gamma(t = nT_d)$, where $T_d$ is the period of the driving force, for n = 0, 1, 2, .... The result is a reduction of a periodic trajectory to a fixed point in the Poincaré plane. In general, n-periodic motion ($\gamma(t) = \gamma(t + nT_d)$) will appear as n points in the Poincaré plane.

(9)To glibly pronounce this motion chaotic is to assume that the relevant qualitative and quantitative analysis has been performed on the system and its supposed chaotic trajectory. If the motion were simply random (ergodic), then the entire Poincaré plane would be filled uniformly. Instead, successive orbits lie on a twisted, folded, banded structure in a non-uniform manner. This structure can be quantified using a number called the fractal dimension (Mandelbrot [1983]). The banded structure upon which the successive section points $\gamma_p$ fall seems to be more than a line or a curve (a topological dimension of 1); but, since it doesn't fill out the state space, less than a plane (a topological dimension of 2). A calculation of the fractal dimension of the attractor in Figure 2 yields $d \approx 1.4$.

REFERENCES

ARMSTRONG, D. M. [1983]: What is a Law of Nature? New York: Cambridge University Press, p. 11.

ARMSTRONG, D. M. [1988]: 'Reply to van Fraassen', Australasian Journal of Philosophy, LXVI, pp. 224--9.

BARROW, J. D. [1988]: The World Within the World. Oxford: Oxford University Press.

BATTERMAN, ROBERT W. [1993]: 'Defining Chaos', Philosophy of Science, 60, p. 43.

BLANSHARD, B. [1976]: 'Necessity in Causation', in M. Brand (ed.): The Nature of Causation. Urbana: University of Illinois Press, p. 253.

CARNAP, R. [1966]: Philosophical Foundations of Physics. New York: Basic Books, p. 3.

DRETSKE, F. [1977]: 'Laws of Nature', Philosophy of Science, XLIV, pp. 248--68.

EARMAN, J. [1986]: A Primer on Determinism. Dordrecht: Reidel.

EKELAND, I. [1989]: Mathematics and the Unexpected. Chicago: University of Chicago Press.

EMMET, D. [1984]: The Effectiveness of Causes. London: Macmillan, p. 29.

GLEICK, J. [1987]: Chaos: Making a New Science. New York: Viking.

GREBOGI, C., OTT, E. and YORKE, J. [1987]: 'Chaos, Strange Attractors, and Fractal Basin Boundaries in Nonlinear Dynamics', Science, 238, p. 632.

HEMPEL, C. [1966]: Philosophy of Natural Science. Englewood Cliffs: Prentice-Hall, p. 50.

HUNT, G. M. K. [1987]: 'Determinism, Predictability, and Chaos', Analysis, XLVII, 3, pp. 129--32.

LAUTERBORN, W. and PARLITZ, U. [1988]: 'Methods of Chaos Physics and Their Application to Acoustics', J. Acoust. Soc. Am., 84, p. 1975.

LEVINE, M. P. [1986/1987]: 'Mackie's Account of Necessity in Causation', Proceedings of the Aristotelian Society, LXXXVII, p. 83.

MCDONALD, S., GREBOGI, C., OTT, E. and YORKE, J. [1983]: 'Fractal Basin Boundaries', Physica, 17D, p. 125.

MCMULLIN, E. [1984]: 'Two Ideals of Explanation', Midwest Studies in Philosophy, IX, pp. 205--20.

MANDELBROT, B. [1983]: The Fractal Geometry of Nature. New York: W. H. Freeman.

MAY, R. M. [1976]: 'Simple Mathematical Models With Very Complicated Dynamics', Nature, 261, p. 459.

SCHUSTER, H. G. [1989]: Deterministic Chaos. Weinheim: VCH.

SMALE, S. [1963]: 'Diffeomorphisms with Many Periodic Points', in S. S. Cairns (ed.): Differential and Combinatorial Topology. Princeton: Princeton University Press, pp. 63--80.

STONE, M. A. [1989]: 'Chaos, Prediction and LaPlacean Determinism', American Philosophical Quarterly, 26, 2, pp. 123--31.

TAVAKOL, R. K. [1991]: 'Fragility and Deterministic Modelling in the Exact Sciences', British Journal for the Philosophy of Science, 42, p. 147.

VAN FRAASSEN, B. C. [1987]: 'Armstrong on Laws and Probabilities', Australasian Journal of Philosophy, LXV, 3, pp. 243--60.

WILSON, M. [1989]: 'Critical Notice: John Earman's A Primer on Determinism', Philosophy of Science, 56, pp. 502--32.

WINNIE, JOHN A. [1992]: 'Computable Chaos', Philosophy of Science, 59, p. 263.

Authors: D. Lynn Holt and R. Glynn Holt. Publication: The British Journal for the Philosophy of Science, December 1993.