# Relevant Distributions for Insurance Prices in an Arbitrage-Free Equilibrium

The increased volatility of economic and financial risk factors such as inflation, interest rates, investment returns, and exchange rates during the past decade has forced consideration of more financial factors along with underwriting risk factors in insurance pricing models. Moreover, competition in financial services markets has forced insurers to move more into the financial arena, for example, in terms of products tied to investment performance, discounting of loss reserves, actuarial modeling of investment strategies, hedging interest-rate risks, and the internationalization of insurance operations. As a consequence, some of the research in finance and in risk management and insurance has started to converge, as noted by Smith (1986) and Buhlmann (1987). Hence, more of the intertemporal models utilized in insurance and actuarial applications involving financial linkages attempt to incorporate the financial concepts of market efficiency and the equilibrium notions underlying competitive market structures (e.g., Kraus and Ross 1982 and Cummins 1988).

The same continuous time stochastic process models are being used for insurance and asset pricing by scholars in risk management and insurance and by researchers in finance. One reason for this convergence is that insurers have most of their assets in financial instruments (e.g., bonds, stocks, and mortgages for life insurance companies and stocks and bonds for property-casualty companies) and their liabilities consist of interest-sensitive components, such as reserves in both life-health and property-liability insurance, which are discounted to a specific valuation date. Consequently, this article analyzes the probabilistic implications of efficiency and equilibrium from the perspective of potential stochastic models pertinent to actuarial calculations or insurance pricing involving financial transactions in an efficient capital market in equilibrium.

Intuitively, an efficient capital market is the manifestation of a market system that works in a cost-effective manner, and the study of efficient markets is a study of the (stochastic) process of price formation, or equivalently of the return generating stochastic process, and the market's adjustment to a sequence of relevant information subsets. However, the primitive notion from finance that "in equilibrium, price efficiency implies that prices reflect all relevant information" is too general to have any practical quantitative applications for actuarial modeling of insurance products affected by financial prices. To quantitatively formalize and model this intuitive notion of an efficient market, scholars in insurance, actuarial science and finance have developed several approaches to describing the stochastic process of prices. Two of these are the traditional "independent increments" or random walk model familiar to actuaries from risk theory, and the more general "fair game" or martingale model.(1)

Some scholars, such as Cummins (1988), Boyle (1977), Black and Scholes (1972), and Boyle and Schwartz (1977), assume that rates of return, for example on stocks or bonds, follow a Brownian motion process. While there is some empirical support for the implied lognormality of the corresponding prices at any fixed point in time, it would be desirable to complement this with an economically based theoretical argument showing why such continuous time probability models arise as a consequence of basic economic notions. Grossman and Shiller (1982, p. 197) also call for a more basic economic rationale for the Brownian motion models which they use.

The Brownian motion and stochastic calculus models referred to above are widely used in insurance and actuarial research, for example see: Emanuel, Harrison and Taylor (1975), Boyle (1977), Martin-Lof (1986), Cummins (1988), and Sharp (1989). The modeling of stochastic interest rates or the modeling of equilibrium rates of return for insurers are of particular importance, as illustrated by Boyle and Schwartz (1977), Giaccotto (1986), Braun (1977), Cox, Ingersoll and Ross (1981), and Langetieg (1980).

In this article, a justification for some of these models is supplied by showing how their distributional properties relate to the concepts of arbitrage-free equilibrium and continuous time trading. The fact that pricing models based on arbitrage arguments have specific distributional implications is not generally well known or understood, and the objective of this article is to clarify this issue further. A mathematical proof is developed to demonstrate that the Brownian motion assumption is, indeed, a valid approximation to the distribution of prices in the study of market efficiency.

In the next section, a brief history of the random walk hypothesis of asset prices is presented. Then, the traditional random walk mathematical implementation of the efficient markets theory is evaluated: A complete characterization of probability distributions that satisfy this first primitive "independent increments" model of efficiency is presented. It is shown that, with continuous trading and continuous price changes, only a Brownian motion process for returns (or the lognormal distribution for prices) is possible. Next, the modern and more general fair game definition of efficient markets is examined. Some authors view this definition as equivalent to the no arbitrage condition. It is shown that even in this more general setting, for dynamic temporally efficient markets, asset return movements can be approximated by Brownian motion, and hence prices can be approximated by the lognormal distribution. Concluding remarks are then provided.

Background

The study of distributional implications of the random walk hypothesis can be traced back to Bachelier's (1900) seminal development of arithmetic (or absolute) Brownian motion, as well as his pioneering work in the application of this stochastic process to financial data. As elucidated in Samuelson (1972), Bachelier predated Einstein in the development of Brownian motion and, in effect, posited a property of the probability distribution that is presently referred to as the Wiener (or arithmetic) Brownian motion process. Bachelier gives three or four purported proofs that the resulting marginal distributions must be normal. As noted by Samuelson (1972), however, any member of the stable family of distributions also satisfies the properties outlined by Bachelier. Subsequently, numerous authors have examined the use of stable distributions for equilibrium asset pricing. Bachelier's results, however, are obtained under the implicit assumption of finite variance, because the normal distribution is the only member of the stable family with a finite variance.

To circumvent the unrealistic characteristics associated with using a Brownian motion model for the price process (e.g., the possibility of negative prices and unlimited liability), Samuelson (1972) proposed replacing the arithmetic Brownian motion process with the geometric Brownian motion process. With this process, a lognormal distribution of prices is obtained. The lognormal distribution has a long history of use in insurance and economics (e.g., Aitchison and Brown (1957), Boyle (1976), and Hogg and Klugman (1984)). As noted in Samuelson (1972), the central limit theorem ensures that, in a wide variety of cases, the probability distribution of prices can be approximated for long holding periods (asymptotically) by a lognormal distribution. Furthermore, as noted in Samuelson and Merton (1974) and Merton and Samuelson (1974), one can, with proper accounting of limiting arguments, still use the mean-variance analysis (in logarithmic return units) that is most familiar to researchers in insurance, finance, and economics. Also, as noted in Samuelson (1970 and 1972), lognormal distributions, and hence mean-variance analysis, appear appropriate in the instantaneous case of continuous trading where the holding period goes to zero.

As noted in Merton (1969), if the asset returns have a joint multivariate lognormal distribution, then, even though portfolios of lognormal asset returns are not exactly lognormally distributed for any finite holding period, the process of continuous revision will leave the distribution of portfolios lognormally distributed. This property (similar to the analogous closure property of the normal distribution) has been used by Merton (1969, 1970, 1974a, and 1974b), Black and Scholes (1972), and Fischer (1975) in solving portfolio selection problems, market equilibrium analysis, pricing of options and corporate liabilities, and pricing of index bonds in a mean-variance framework. These option pricing models and the general diffusion model approaches they introduced have been extensively applied in insurance pricing, pension funding, and immunization theory (e.g., Cummins (1988), Boyle (1978, 1980, 1987), Doherty and Garven (1986), Tapiero and Jacque (1987), Merton (1977), Sharp (1976), and Wilkie (1987)). The next two sections justify these assumptions for many applications relevant to insurance modeling.

Distributional Implications of the Traditional Independent Increments Definition

In this section it is shown that a primitive implementation of the intuitive notion of an efficient market, namely that returns follow an independent increments process (the so-called random walk hypothesis), has specific distributional implications. Fama (1970) calls this the weak form of the efficient markets hypothesis, which most academics agree has some empirical validity. The implied distributions include all of the models used in the literature to derive financial valuation models in dynamic temporally efficient markets, as illustrated by Merton's jump processes (1976), the binomial model of Cox, Ross and Rubinstein (1979), the familiar compound Poisson model from risk theory which Press (1967) uses for stock prices, the geometric Brownian motion model of Samuelson (1972), and continuous time pricing models in insurance (see Merton (1977), Kraus and Ross (1982), and Cummins (1988)).

Like previous researchers, it is assumed throughout this article that the stochastic process under consideration is stationary. Some form of the stationarity assumption is necessary to analyze data over time (such as the stochastic process of prices or claims) because of the need to use past data to obtain information about the future. If nonstationarity of a known type is present (e.g. known trend or cycles) then a transformation can be made to obtain a stationary process which can be subjected to statistical analysis. If, on the other hand, the process is assumed to be nonstationary and the structure of the nonstationarity is also unknown, so that no transformation to a stationary process is possible, then any analysis is futile. Essentially, in this situation, only a single sample of size one is observed at each time with no model for relating the temporal observations. Thus, the stationarity postulate is necessary for the analysis.

The independent increments formulation for implementing the notion of a stochastic process in an efficient market is the traditional approach and can be formulated mathematically as follows: Let X(t) denote the insurance price at time t, and consider the returns over non-overlapping intervals of time t_0 < t_1 < . . . < t_n. The motivation for this traditional definition is that the return variables over disjoint time intervals, e.g., ln X(t_1) - ln X(t_0) and ln X(t_2) - ln X(t_1), should not contain information about each other, and hence should be independent random variables.(2) This implementation of efficiency of pricing in the insurance market amounts to saying that ln X(t) is a stochastic process with independent increments.(3)
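
As a numerical check of this independent-increments formulation, the log-returns of a simulated random walk price path over disjoint intervals should be approximately uncorrelated. The sketch below is purely illustrative; the drift and volatility figures are arbitrary assumptions, not values taken from the article:

```python
import math
import random

def log_returns(prices):
    """Log-returns over successive non-overlapping intervals:
    r_i = ln X(t_i) - ln X(t_{i-1})."""
    return [math.log(b / a) for a, b in zip(prices, prices[1:])]

random.seed(3)

# Hypothetical price path whose logarithm follows a random walk with
# normally distributed steps (drift and volatility chosen arbitrarily).
log_price, prices = 0.0, []
for _ in range(10001):
    prices.append(math.exp(log_price))
    log_price += random.gauss(0.0005, 0.01)

r = log_returns(prices)

# Independent increments imply that returns over disjoint intervals show
# (approximately) zero sample autocorrelation at lag 1.
mean_r = sum(r) / len(r)
cov = sum((a - mean_r) * (b - mean_r) for a, b in zip(r, r[1:])) / (len(r) - 1)
var_r = sum((a - mean_r) ** 2 for a in r) / len(r)
lag1_corr = cov / var_r
```

With 10,000 simulated returns, the lag-one sample correlation is within sampling noise of zero, as the independence assumption requires.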

The assumption of independent increments is one of the earliest and most common procedures used to implement the efficient markets concept in theoretical modeling and it contains the random walk concept often used in risk theory modelling. This assumption has very specific distributional implications, as shown below.

The following stochastic structure theorem (cf. Gikhman and Skorohod, 1969), which holds for Z(t) = ln X(t), shows precisely the relationship among the random walk, the binomial, the Poisson jump, and Brownian motion models for returns.

Theorem 1: (Structure for Processes with Independent Increments)

If Z(t) is any process with independent increments, then Z(t) can be decomposed as Z(t) = Z_1(t) + Z_2(t) + Z_3(t), where Z_1(t) is a process whose sample paths have discontinuities at fixed times, Z_2(t) is a Gaussian process with independent increments, and Z_3(t) is a (possible limit of a) compound Poisson process with jump discontinuities which occur at random times. Moreover, the three processes are independent. The use of each of these components for financial modeling is discussed below.

The process Z_1(t) is a sum of independent random variables with the times of the sample path jumps known in advance. Of course, assuming stationarity in distribution implies that the sizes of the jumps are identically distributed random variables as well. This corresponds to the random walk hypothesis for (daily, weekly, or monthly) prices. If the prices are allowed to change only by the amounts +1 or -1, with probabilities p and (1 - p) respectively, occurring at times h, 2h, 3h, . . ., then the binomial jump model of Cox, Ross and Rubinstein (1979) is obtained.
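
A minimal simulation of this fixed-time binomial jump component may make the structure concrete; the step probability p = 0.55 and the number of steps are hypothetical choices for illustration only:

```python
import random

def binomial_jump_path(n_steps, p, jump=1.0, z0=0.0):
    """One path of the fixed-time jump component Z_1(t): at each of the
    times h, 2h, ..., the log-price moves by +jump with probability p
    and by -jump with probability 1 - p (Cox-Ross-Rubinstein style)."""
    z, path = z0, [z0]
    for _ in range(n_steps):
        z += jump if random.random() < p else -jump
        path.append(z)
    return path

random.seed(0)
p, n = 0.55, 100
# The expected terminal displacement is n * (2p - 1) = 10 for these choices.
terminals = [binomial_jump_path(n, p)[-1] for _ in range(5000)]
mean_terminal = sum(terminals) / len(terminals)
```

Averaging over many paths, the terminal displacement clusters around the theoretical drift n(2p - 1), as the independent-increments structure predicts.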

For many insurance models (e.g., risk processes), stochastic continuity is perhaps a more realistic assumption than the fixed-time jump process Z_1(t). Stochastic continuity of prices says that no jumps in the price can be anticipated in advance. This is the same as stating Pr[Z(.) is discontinuous at t_0] = 0 for all t_0, or equivalently that P-lim_{t → t_0} Z(t) = Z(t_0). Under this reasonable assumption, the component Z_1(t) = 0, i.e., there are no fixed points of discontinuity. However, this does not rule out discontinuous sample paths. Indeed, the Poisson process used in claims modelling of insurer operations (cf. Panjer and Willmot 1989) and the stable processes used in insurance loss processes (cf. Paulson and Faris 1985) both have purely discontinuous (i.e., step) sample paths, but the exact time at which a jump will occur is not known in advance of its occurrence. Both component processes Z_2(t) and Z_3(t) are stochastically continuous. It has been argued (cf. Campbell 1980) that this is most appropriate for many life insurance stochastic processes.

The process Z_2(t) is a generalized Brownian motion; it has independent increments, and, for fixed t, the distribution of Z_2(t) is normal with mean μ(t) and variance σ²(t). Moreover, the function σ²(t) is an increasing function of t, and the sample paths of Z_2(t) are not only continuous in probability but are actually continuous with probability one. If the original process Z is stationary, then μ(t) = tμ and σ²(t) = tσ² for fixed numerical constants μ and σ², and Z_2 is ordinary Brownian motion. In the context of modeling the rates of return on investments or the underwriting rate of return, this corresponds to prices X(t) following a geometric Brownian motion process as postulated by Samuelson (1972). In fact, the assumption of continuous sample paths and independent increments for the returns ln X(t) is necessary and sufficient for the prices X(t) to follow a generalized geometric Brownian motion. If the original implementation of the definition of market efficiency is used (i.e., independent increments), and if prices are assumed to change continuously in time, then it follows that prices are lognormal; in fact, the entire process is generalized Brownian motion (see Yeh (1973), Chapter 3, for a proof of this result). Note that this same argument applies to the pricing of bonds, whose use for interest rate calculations is so fundamental in insurance.
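
The stationary case of this Z_2 component can be sketched directly: prices are sampled by exponentiating a Gaussian random walk for ln X(t). The parameter values below (x0 = 100, μ = 0.05, σ = 0.2) are assumptions for illustration:

```python
import math
import random

def gbm_path(x0, mu, sigma, t_max, n_steps):
    """Sample a geometric Brownian motion price path: ln X(t) is Gaussian
    with independent increments, mean mu*t and variance sigma^2*t (the
    stationary case of the Z_2 component in Theorem 1)."""
    dt = t_max / n_steps
    z, path = math.log(x0), [x0]
    for _ in range(n_steps):
        z += mu * dt + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
        path.append(math.exp(z))
    return path

random.seed(1)
x0, mu, sigma = 100.0, 0.05, 0.2
# ln X(1) should be approximately normal with mean ln(x0) + mu * 1.
target = math.log(x0) + mu
log_terminals = [math.log(gbm_path(x0, mu, sigma, 1.0, 50)[-1])
                 for _ in range(2000)]
mean_log = sum(log_terminals) / len(log_terminals)
```

Averaging the terminal log-prices over many sampled paths recovers ln(x0) + μt, consistent with the lognormality of X(t) at each fixed time.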

The final component in the structure theorem, Z_3(t), is also stochastically continuous, but every sample path is a step function. The times of these jump discontinuities are random, however, and cannot be predicted with certainty prior to observation. It can be shown that the process Z_3(t) is (perhaps a limit of) a compound Poisson process. To illustrate this, suppose N(t) is a Poisson process with intensity (mean) λ(t) up to time t, and that X_1, X_2, . . . are independent and identically distributed; then Z_3(t) has, in distribution, the form of the compound sum X_1 + X_2 + . . . + X_N(t) plus Ct, where C is a constant centering term. In the case where N(t) denotes the number of (jump) price changes up to time t and X_i denotes the size of the i-th price change, this becomes an often used formulation for prices which follow a random walk, but in which the number of transactions (trading volume) involved per period is also explicitly acknowledged to be a random variable. These compound Poisson stochastic processes provide the foundation for traditional collective risk theory models and financial ruin theory in actuarial science (cf. Bowers et al. 1986) and have numerous applications in insurance theory (cf. Panjer and Willmot (1989)).(4) They have purely discontinuous sample paths. Merton's (1976) jump process model for asset prices is also a special case of the above development, as is the Press (1967) model for stock prices, and most risk theory processes.(5)
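
A compound Poisson sum of this Z_3 type is easy to sample. The intensity λ and the uniform jump-size law below are hypothetical choices made purely for illustration:

```python
import math
import random

def sample_poisson(mean):
    """Knuth's multiplicative method for sampling a Poisson variate."""
    threshold = math.exp(-mean)
    k, prod = 0, 1.0
    while True:
        prod *= random.random()
        if prod <= threshold:
            return k
        k += 1

def compound_poisson(lam, t, jump_sampler):
    """Z_3(t)-style sum: N(t) ~ Poisson(lam * t) jumps occurring at
    random times, with i.i.d. jump sizes drawn from jump_sampler."""
    return sum(jump_sampler() for _ in range(sample_poisson(lam * t)))

random.seed(4)
lam, t = 2.0, 1.0
jump = lambda: random.uniform(-0.01, 0.03)  # hypothetical jump-size law
# E[Z_3(t)] = lam * t * E[jump] = 2 * 0.01 = 0.02 for these choices.
vals = [compound_poisson(lam, t, jump) for _ in range(20000)]
mean_val = sum(vals) / len(vals)
```

The empirical mean of the simulated sums matches λt E[X_i], the familiar compound Poisson mean from collective risk theory.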

In conclusion, the traditional formulation of market efficiency using independent increment stochastic processes has definite distributional implications. In particular, the only stationary return process with independent increments that allows for continuous price changes is the Brownian motion process. This goes a long way towards justifying the geometric Brownian motion (lognormal) model that has been postulated in many continuous time situations (cf. Black and Scholes (1973), Cummins (1988), Samuelson (1970)). Moreover, it also gives theoretical justification for the empirically observed good fit of the lognormal distribution to security price data (e.g., Cootner (1974), Lintner (1972), and Blattberg and Gonedes (1974)).

A Second Weaker "Fair Game" Definition of Market Efficiency and Its Distributional Implications

The independent increments definition of efficiency is the older and much more restrictive (perhaps even misleading) approach to modeling efficiency. The most succinct and general definition of market efficiency is the lack of predictable arbitrage possibilities. Samuelson's fair game model (1965 and 1974) formalizes this intuitive definition by arguing that appropriately discounted successive prices should form a martingale stochastic process. This is the more modern approach to implementing these intuitive notions. In fact, Ross (1987) and others believe the martingale model formalizes the definition of the no arbitrage condition.(6) Much of the actuarial risk model literature is derived using a martingale method (cf. Gerber 1979 and de Vylder 1977).

To be mathematically specific, let X(t) denote the price of an asset at time t. Then, to properly adjust for long term inflationary trends, riskless market returns, underwriting cycles, and other economic factors, all prices are discounted to a common comparison date, say t = 0. Accordingly, Samuelson lets Y(t) = exp{-δ(t)}X(t) denote the properly discounted price. If seasonal or cyclical effects are present (as in certain insurance lines), they are assumed to be incorporated into the adjusted prices Y(0), Y(1), . . . through the discount factor δ(t). Roughly speaking, Samuelson's argument in support of a martingale formulation for equilibrium prices in an efficient market is as follows. If the return quotient ln[Y(t + u)/Y(t)] were expected to be positive at some future time t + u, given the information available about prices up to time t, then buyers of insurance would flock to the market at time t to take advantage of the opportunity to insure their risk at a lower price. The demand for coverage at time t would go up and the prices would rise until the perceived excess return disappeared. Similarly, if the expected value of the return quotient were negative, then market participants would tend to forgo insurance for that length of time while waiting for a lower price. This in turn would drive the price at time t down until the perceived price inequity disappeared.

Formulated mathematically, the above argument says that the equilibrium price in an efficient market should satisfy E[ln Y(t + u) | Y(s), s ≤ t] = ln Y(t), where ln[Y(t + u)/Y(t)] is the logarithm of the return on the asset over the period from t to t + u. This may be stated equivalently in terms of the rate of return process as E[ln{Y(t + u)/Y(t)} | Y(s), s ≤ t] = 0. This is Samuelson's celebrated fair game or martingale property of asset returns; that is, ln Y(t) is a martingale.(7) For a more formal development of the martingale consequences of efficiency, see Samuelson (1972 and 1974).

It is now shown that the geometric Brownian motion process model for dynamic prices is a consequence of the fair game (martingale) representation of prices in a temporally efficient market, and not just an assumption made for computational purposes. It follows that those who believe in efficient markets (as a fair game model) should also believe that geometric Brownian motion models provide reasonable approximations for efficient markets.

As noted above, formalizing the notion of market efficiency in terms of a fair game model implies that the stochastic process ln Y(t), t = 1, 2, . . ., forms a martingale sequence, where Y(t) = exp{-δ(t)}X(t) is the properly discounted present value at time t. Here t might refer to days, weeks, months, or any sufficiently non-zero length of time. The requirement that t not be too close to zero is made so that the physical trading process (in the case of equity or bond market prices) or competitive pricing reactions (in the case of insurance contracts) have sufficient time to respond to unbalanced expectations and force the martingale property to hold.

In addition to the martingale property of ln Y(t), which follows from the fair game formulation of market efficiency, two other specifications on the return process ε(t) = ln Y(t) - ln Y(t - 1) are included. These two other specifications are intuitively consistent with the unpredictability notion of a competitive and efficient market. First, assume ε(t) is stationary. As noted before, this should be the case if there are no anticipated structural changes occurring in the returns for which investors can adjust.(8) It might also be noted that this assumption is very common in the finance, economics, and insurance literature, because it is natural to attempt to use past data to fit the unknown parameters in a stochastic model. The stationarity assumption makes it possible to obtain the necessary parameter estimates.

The final assumption required is that ε(t) be an ergodic process.(9) Essentially this means that time averages and state space averages coincide. The assumption of ergodicity has two desirable properties. First, since one can only observe a single sample path of the stochastic price process in prior and current economic states of nature, ergodicity allows us to replace space averages (averages over possible states of nature for a future point in time) by time averages (averages of a single state of nature over past points in time). Second, ergodicity implies that the limiting equilibrium distribution is independent of the starting point of the stochastic process. If the limiting distribution is viewed as characterizing stochastic equilibrium, the ergodic property extends the notion of deterministic equilibrium, which is independent of the starting point, to stochastic equilibrium.

Ergodicity is tacitly assumed by most authors doing empirical research involving equity and bond prices because they estimate, say, the average price in some future potential state of the economy by using the average of past prices (which reflect just one state of the world at each point observed over time). Obviously only a single realization of the sample path ε(t, ω_1) can be observed (namely that corresponding to the state of nature, or state of the economy, ω_1 at each point in time). However, any inference concerning expected values at a future time t + u involves averaging over the ensemble of possible economic states of nature {ω}. Thus, this inference must be based on averages of the single sample path which was observed over time. The assumption that the past values of the single sample path ε(t, ω_1) over time give information about expected values of ε(t + u) over ω requires ergodicity.

Although some authors (e.g., Lucas (1978)) assume that asset prices form a stationary Markov process, it can be shown that a stationary Markov chain with positive transition probabilities is ergodic. Thus, this assumption is a common one in theoretical developments as well. However, it should be noted that the stationary ergodic assumption made in this article is weaker, or more general, than those made by many other scholars conducting empirical and theoretical work. The precise form of the stationary ergodic process governing the discounted prices (e.g., the ARMA process as used by Panjer and Bellhouse (1980) and Bellhouse and Panjer (1981), or the Ito process used by Cummins (1988) and others) is not specified in advance.

It is interesting to observe that just the assumptions of stationarity and ergodicity by themselves firmly embody the intuitive interpretation that sequences of past prices cannot exhibit statistically exploitable patterns. The following theorem from the engineering literature on statistical communications can be used to show that, in the stationary ergodic world assumed by most authors, technical analysis is essentially assumed to be futile.

Theorem 2: (McMillan's Equipartition Theorem)

Let V_1, V_2, . . . be a stationary ergodic process with a finite number K of possible values for each V_i. Then there is a number H (called the entropy of the process) such that, for any given number ε > 0 and for n sufficiently large, the set of all K^n possible sequences of observed values (v_1, v_2, . . ., v_n) is decomposable into the disjoint union of two sets A_n and B_n. These two sets satisfy the following:

Pr[(V_1, . . ., V_n) ∈ B_n] < ε,

and, for each (v_1, . . ., v_n) ∈ A_n,

exp{-n(H + ε)} < Pr[V_1 = v_1, . . ., V_n = v_n] < exp{-n(H - ε)}.

Intuitively, McMillan's Equipartition Theorem indicates that, except for a set of possible sequences of length n which has only a very small chance of occurring (probability less than ε), all possible sequences of length n are essentially equally likely to be observed, the probability of each sequence being about exp{-nH}. In the context of prices, this says that, with very high probability, a particular price or return sequence of length n with which nature has presented us, and which has actually been observed, is essentially interchangeable with almost any other length-n sequence. Of course, the other length-n sequences not observed could contain quite different patterns than the pattern displayed by the observed data. The unpredictability of asset prices and the futility of technical analysis (pattern recognition techniques) in a stationary ergodic asset market are thus drawn as a conclusion from the stationarity and ergodicity assumptions which underlie most of the literature involving an efficient market. A similar conclusion applies to insurance pricing models whose foundation rests on these two postulates.
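
The equipartition property can be illustrated numerically in the simplest stationary ergodic case, an i.i.d. Bernoulli(p) sequence (a deliberately toy example; the theorem itself covers far more general processes, and p = 0.3 is an arbitrary choice):

```python
import math
import random

# Entropy (in nats) of an i.i.d. Bernoulli(p) source.
p = 0.3
H = -p * math.log(p) - (1.0 - p) * math.log(1.0 - p)

def neg_log_prob_rate(seq, p):
    """-(1/n) ln Pr(sequence) under the Bernoulli(p) model; by the
    equipartition theorem this concentrates near the entropy H."""
    n, k = len(seq), sum(seq)
    return -(k * math.log(p) + (n - k) * math.log(1.0 - p)) / n

random.seed(5)
n = 20000
seq = [1 if random.random() < p else 0 for _ in range(n)]
rate = neg_log_prob_rate(seq, p)
# A typical observed sequence therefore has probability roughly exp(-n * H).
```

The per-symbol negative log-probability of the observed sequence lands close to H, i.e., the observed path is a "typical" member of the equiprobable set A_n.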

Until now, nothing has been said about the nature of the statistical distribution of the rate of return process. By including the economic assumption that there is no arbitrage opportunity in the pricing of contracts over time (as implemented by Samuelson's martingale assumption) the following very strong approximation theorem for price distributions can be obtained.

Theorem 3:

Let X(t) denote the price and Y(t) = exp{-δ(t)}X(t) the properly discounted present value of X(t). If the market is efficient in the sense that the return process ε(t) = ln Y(t) - ln Y(t - 1) is a square integrable, stationary, ergodic process, and ln Y(t) has the martingale property, then X(t) is approximately lognormal. Moreover, the entire price process {X(t)} is simultaneously approximated by a geometric Gaussian process over the entire time interval.

Proof:

The proof follows from a theorem of Billingsley and Ibragimov on stationary ergodic processes with martingale differences (see Billingsley (1968), Theorem 23.1). Their theorem states the following:

Let ε(1), ε(2), . . . be a stationary ergodic process of martingale differences, so that E[ε(t) | ε(s), s < t] = 0 and E[ε(t)²] = σ² < ∞, and let S_k = ε(1) + . . . + ε(k). Define the Donsker-like partial sum process by simply linearly connecting the points S_1, S_2, . . . Mathematically, this connected curve may be written as S_n(t, ω) = S_[nt](ω) + (nt - [nt])ε([nt] + 1). Then the stochastic process S_n(t, .)/(σ√n), 0 ≤ t ≤ 1, converges weakly as a stochastic process to Brownian motion. That is, all the finite dimensional distributions of S_n(t, .)/(σ√n) converge to the corresponding finite dimensional distributions for Brownian motion.

Returning to the determination of the price process in an arbitrage-free and efficient-market framework, take ε(t) = ln Y(t) - ln Y(t - 1) in the Billingsley-Ibragimov theorem. The arbitrage-free, efficient-market hypothesis implies ln Y(n) is a martingale, and hence the ε(t) are stationary ergodic martingale differences. By the multiplicative property of the return process, S_k can be calculated as follows:

S_k = ε(1) + . . . + ε(k) = ln Y(k) - ln Y(0).

It then follows from the above quoted theorem of Billingsley and Ibragimov that S_[nt] converges to Brownian motion.

An examination of the stochastic process S_[nt] in the above framework should be helpful. For ease of illustration, take Y(0) = 1. The sum S_[nt] is like a dependent random walk: S_[nt] takes on the values S_1, S_2, S_3, . . . at the points t = 1/n, 2/n, 3/n, . . . and is linearly connected in between these points, as shown in Figure 1.

An appropriate dependent martingale central limit theorem might assert that (when appropriately normalized) the partial sum S_n = ln Y(n) would be approximately normal (and hence justify the discrete lognormal models used, for example, by Doherty and Garven (1986)),(10) but the Billingsley-Ibragimov theorem says much more. The convergence result in Theorem 3 asserts that the entire path of the dependent random walk ln Y(n) = S_n during the first n steps is distributed approximately as the entire path up to t = 1 for a particle under Brownian motion. This contains information going far beyond the central limit theorem. Qualitatively, the convergence of S_[nt] to Brownian motion says that if τ is small, then a return process subject to displacements of size ε(1), ε(2), . . . at successive times τ, 2τ, . . . will, when viewed from afar, appear to perform approximately a Brownian motion. Since the return differences ε(t) are stationary ergodic martingale differences with finite variance in the case of the insurance contract or bond prices considered in this article, the theorem indicates that ln Y(k) is approximately a Brownian motion, or, upon exponentiation, the discounted price process Y(k) is approximately a geometric Brownian motion. Translating back to the original price process, X(t) is approximately a geometric Brownian motion with mean function δ(t) and variance σ²t. This result theoretically substantiates the empirical evidence cited previously which suggested that geometric Brownian motion gives good approximations to the sample paths for discounted prices. Moreover, it supplies the development desired by Grossman and Shiller (1982, p. 197) to justify (at least asymptotically) on economic grounds the commonly made mathematical assumption of geometric Brownian motion (Ito process).
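
The Billingsley-Ibragimov scaling can be checked numerically by simulating a stationary ergodic martingale-difference return process that is deliberately not independent. The ARCH(1)-type recursion below is a hypothetical choice made only for illustration, with coefficients picked so that the stationary variance E[ε(t)²] equals 1:

```python
import math
import random

def arch_partial_sum(n):
    """One partial sum S_n of a stationary ergodic martingale-difference
    process that is NOT independent: eps(t) = sigma_t * z_t with
    sigma_t^2 = 0.5 + 0.5 * eps(t-1)^2 (hypothetical ARCH(1) choice),
    so the stationary variance satisfies E[eps^2] = 1."""
    eps, s = 0.0, 0.0
    for _ in range(n):
        sigma = math.sqrt(0.5 + 0.5 * eps * eps)
        eps = sigma * random.gauss(0.0, 1.0)
        s += eps
    return s

random.seed(6)
n, reps = 500, 2000
# Billingsley-Ibragimov scaling: S_n / sqrt(n) should be close to N(0, 1)
# even though the eps(t) are dependent through their volatility.
scaled = [arch_partial_sum(n) / math.sqrt(n) for _ in range(reps)]
m = sum(scaled) / reps
v = sum(x * x for x in scaled) / reps - m * m
```

Despite the dependence in the increments, the scaled partial sums have mean near zero and variance near one, consistent with weak convergence to Brownian motion.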

Concluding Remarks

In this article, it has been pointed out that both the original independent increments (random walk) model and the martingale (fair game) model for quantifying the intuitive notion of arbitrage free equilibrium pricing, as used in insurance pricing, actuarial science, and financial modeling, have specific implications for the relevant statistical distributions of the underlying price processes. It is shown that the models used by actuaries and researchers in insurance involving Brownian motion and Ito processes are not ad hoc models adopted for mathematical convenience, but rather can be obtained (at least to a level of approximation) as a conclusion of the efficient markets hypothesis from finance. These results also demonstrate the relevance of the lognormal distribution for discrete time models and show that theoretical results support, in a world of approximations, the use of lognormal or geometric Brownian motion processes in insurance pricing models involving financial and economic factors. [Figure Omitted]

(1)The martingale model, according to some authors (e.g., Ross 1987), encompasses the concept of no arbitrage. At the very least, it can be shown that when no arbitrage is possible, there exists an equivalent probability measure which gives the prices and for which the process of returns is a martingale. (See Ross (1989, p. 3) for a discussion and literature.)

(2)This is a more restrictive formulation than Samuelson's "fair game" formulation of market efficiency, which is investigated in the next section.

(3)It should be noted that the geometric nature of the way in which returns are compounded implies a multiplicative stochastic structure, since arbitrage does not operate on prices themselves, but instead works to eliminate excess returns. Consequently, it is usually the stochastic process of rates of return lnX(t) that has independent increments in this traditional formulation of the efficient markets hypothesis.

(4)Due to a general central limit theorem (cf. Breiman (1968, Chapter 9.8)), the stable processes are also of this third component type and have been put forth as equity asset return models.

(5)Brownian motion models are also used in collective risk theory for economically related processes. The approximation of compound Poisson processes by Brownian motion processes can be proven, and the economic rationale for the models can be developed from the results of the current paper.

(6)Other, even more general, mathematical definitions of market efficiency with respect to information available in the market place are possible. Latham (1986) reviews the newer attempts to mathematically formalize the notions implicit in the efficient markets hypothesis and develops a one period (i.e., discrete time) definition of efficiency which satisfies a "subset property".
(7)Lucas (1978) has shown that in an asset market in which prices follow a stationary Markov process with strictly positive transition probabilities, the rates of return determined endogenously need not follow a martingale process; rather, there is some other discounted process [w.sub.t] which will instead form a martingale. Since stationary Markov chains in which all states communicate are ergodic, the development given in the sequel can be translated to the [w.sub.t] process if one is working within the Lucas framework. Samuelson's formulation is used for ease of exposition and simplicity in interpretation of the results.

(8)This assumption does not imply, however, that the original stochastic process X(t) for prices is stationary, since cyclic variation, for example, may be present in X(t) and removed from the rate of return quotient process [Xi](t) by the market participants by appropriately selecting the discounting function [Delta](t) in Y(t).

(9)For readers unfamiliar with the mathematical concept of ergodicity, it can be illustrated in the context of a game of pool. If a player hits a ball so that it strikes one of the table's edges at 90 [degrees], the ball will rebound and follow the same path back. If there were no friction and no pockets, the ball would continue to go back and forth along the same trajectory. This is an example of periodicity, where the ball passes over only certain points on the pool table. Ergodicity, on the other hand, would be observed if the ball were struck so that it came off the wall at an irrational angle other than 90 [degrees]. Instead of following the same path back, it would follow a different path every time it rebounded off one of the walls. If there were no friction and no pockets, the ball would eventually pass over every point on the table. Thus, the ball's movement with respect to the pool table would be characterized as an ergodic process.
(10)The results here pertain to price distributions when there is economic efficiency and no arbitrage. However, if one assumes that price distributions are mere reflections of loss distributions, then under certain strong assumptions an approximation of loss distributions by lognormal models may be justified in certain circumstances. This approximate lognormality has been found for some loss distributions and not others, so the approximation for loss distributions is by no means universal. Economic efficiency works primarily on the prices, not the losses, in these cases.

References

Aitchison, J. and Brown, J. A. C., 1957, The Lognormal Distribution, Cambridge University Press.
Bachelier, L., 1900, Theorie de la Speculation, Annales de l'Ecole Normale Superieure, 17: 21-86. Translated in Cootner, P. H., ed. (1964), The Random Character of Stock Market Prices, MIT Press, Cambridge, Massachusetts, 17-75.
Bellhouse, D. R. and H. H. Panjer, 1981, Stochastic Modelling of Interest Rates with Applications to Life Contingencies - Part II, Journal of Risk and Insurance, 48: 628-37.
Billingsley, P., 1968, Convergence of Probability Measures, New York: Wiley.
Black, F. and M. Scholes, 1973, The Pricing of Options and Corporate Liabilities, Journal of Political Economy, 81: 637-54.
Blattberg, R. C. and N. J. Gonedes, 1974, A Comparison of Stable-Paretian and Student Distributions as Statistical Models for Stock Prices, Journal of Business, 47: 244-80.
Bowers, N. L., H. Gerber, J. Hickman, D. Jones and C. Nesbitt, 1986, Actuarial Mathematics, Itasca, Illinois: Society of Actuaries.
Boyle, Phelim P., 1976, Rates of Return as Random Variables, Journal of Risk and Insurance, 43: 693-713.
Boyle, Phelim P., 1977, Financial Instruments for Retired Homeowners, Journal of Risk and Insurance, 44: 513-20.
Boyle, Phelim P., 1978, Immunization under Stochastic Models of the Term Structure, Journal of the Institute of Actuaries, 105: 177-87.
Boyle, Phelim P., 1980, Recent Models of the Term Structure of Interest Rates with Actuarial Applications, Transactions of the 21st International Congress of Actuaries, Topic 4, 95-104.
Boyle, Phelim P., 1987, Perspective on Mortgage Default Insurance, in: I. B. MacNeill and G. J. Umphrey, eds., Actuarial Science, Advances in the Statistical Sciences, 6: 185-99.
Boyle, Phelim P. and Eduardo S. Schwartz, 1977, Equilibrium Prices of Guarantees under Equity-Linked Contracts, Journal of Risk and Insurance, 44: 639-60.
Braun, H., 1986, Weak Convergence of Asset Processes with Stochastic Interest Returns, Scandinavian Actuarial Journal, 98-106.
Breiman, L., 1968, Probability, Addison-Wesley, Reading, Massachusetts.
Buhlmann, Hans, 1987, Actuaries of the Third Kind?, ASTIN Bulletin, 17: 137-38.
Campbell, R. A., 1980, The Demand for Life Insurance: An Application of the Economics of Uncertainty, Journal of Finance, 35: 1155-72.
Cootner, P. H., 1974, The Random Character of Stock Market Prices, MIT Press, Cambridge, Massachusetts.
Cox, J. C., J. E. Ingersoll, Jr. and S. A. Ross, 1981, A Reexamination of Traditional Hypotheses about the Term Structure of Interest Rates, Journal of Finance, 36: 769-99.
Cox, J. C., S. A. Ross and M. Rubinstein, 1979, Option Pricing: A Simplified Approach, Journal of Financial Economics, 7: 229-63.
Cummins, J. David, 1988, Risk-Based Premiums for Insurance Guaranty Funds, Journal of Finance, 43: 823-39.
De Vylder, F., 1977, Martingales and Ruin in a Dynamical Risk Process, Scandinavian Actuarial Journal, 217-25.
Doherty, Neil and James R. Garven, 1986, Price Regulation in Property-Liability Insurance: A Contingent-Claims Approach, Journal of Finance, 41: 1031-50.
Emanuel, D. C., J. M. Harrison and A. J. Taylor, 1975, A Diffusion Approximation for the Ruin Function of a Risk Process with Compounding Assets, Scandinavian Actuarial Journal, 240-47.
Fama, E. F., 1970, Efficient Capital Markets: A Review of Theory and Empirical Work, Journal of Finance, 25: 383-417.
Fischer, S., 1975, The Demand for Index Bonds, Journal of Political Economy, 83: 509-34.
Gerber, H., 1979, An Introduction to Mathematical Risk Theory, Huebner Foundation Monograph No. 8, Wharton School, University of Pennsylvania, Richard D. Irwin.
Giaccotto, Carmelo, 1986, Stochastic Modeling of Interest Rates: Actuarial vs. Equilibrium Approach, Journal of Risk and Insurance, 53: 435-53.
Gikhman, I. and A. V. Skorohod, 1969, Introduction to the Theory of Random Processes, W. B. Saunders, Philadelphia, Pennsylvania.
Grossman, S. and R. Shiller, 1982, Consumption Correlatedness and Risk Measurement in Economies with Non-Traded Assets and Heterogeneous Information, Journal of Financial Economics, 10: 195-210.
Hogg, R. V. and S. A. Klugman, 1984, Loss Distributions, New York: Wiley.
Ingersoll, J., 1976, A Theoretical and Empirical Investigation of Dual Purpose Funds: An Application of Contingent Claims Analysis, Journal of Financial Economics, 3: 83-123.
Kraus, A. and S. A. Ross, 1982, The Determination of Fair Profits for the Property-Liability Insurance Firm, Journal of Finance, 37: 1015-28.
Langetieg, T. C., 1980, A Multivariate Model of the Term Structure, Journal of Finance, 35: 71-97.
Latham, Mark, 1986, Informational Efficiency and Information Subsets, Journal of Finance, 41: 39-52.
Lintner, J., 1965, Security Prices, Risk and Maximal Gain from Diversification, Journal of Finance, 20: 587-615.
Lintner, J., 1972, Equilibrium in a Random Walk and Lognormal Securities Market, Harvard Institute of Economic Research Discussion Paper No. 235, Harvard University, Cambridge, Massachusetts.
Lucas, R. E., 1978, Asset Prices in an Exchange Economy, Econometrica, 46: 1429-45.
Martin-Lof, A., 1986, A Stochastic Theory of Life Insurance, Scandinavian Actuarial Journal, 65-81.
Merton, R. C., 1969, Lifetime Portfolio Selection under Uncertainty: The Continuous Time Case, Review of Economics and Statistics, 51: 247-57.
Merton, R. C., 1970, Optimum Consumption and Portfolio Rules in a Continuous Time Model, Journal of Economic Theory, 3: 373-413.
Merton, R. C., 1974a, Theory of Rational Option Pricing, Bell Journal of Economics, 4: 141-83.
Merton, R. C., 1974b, An Intertemporal Capital Asset Pricing Model, Econometrica, 41: 867-88.
Merton, R. C., 1976, Option Pricing When Underlying Stock Returns Are Discontinuous, Journal of Financial Economics, 3: 125-44.
Merton, R. C., 1977, An Analytic Derivation of the Cost of Deposit Insurance and Loan Guarantees: An Application of Modern Option Pricing Theory, Journal of Banking and Finance, 1: 3-11.
Merton, R. C. and P. A. Samuelson, 1974, Fallacy of Lognormal Approximation of Optimal Decision Making Over Many Periods, Journal of Financial Economics, 1: 67-94.
Panjer, H. H. and D. R. Bellhouse, 1980, Stochastic Modelling of Interest Rates with Applications to Life Contingencies, Journal of Risk and Insurance, 47: 91-110.
Panjer, H. H. and G. E. Willmot, 1989, Insurance Risk Models, Lecture Notes, Act Sci 431, University of Waterloo, Waterloo, Ontario.
Paulson, A. S. and N. J. Faris, 1985, A Practical Approach to Measuring the Distribution of Total Annual Claims, in: J. D. Cummins, ed., Strategic Planning and Modeling in Property-Liability Insurance, Norwell, Massachusetts: Kluwer Academic Publishers.
Ross, S. A., 1987, Arbitrage and Martingales with Taxation, Journal of Political Economy, 95: 371-93.
Ross, S. A., 1989, Information and Volatility: The No-Arbitrage Martingale Approach to Timing and Resolution Irrelevancy, Journal of Finance, 44: 1-18.
Samuelson, P. A., 1965, Proof that Properly Anticipated Prices Fluctuate Randomly, Industrial Management Review, 6: 41-49.
Samuelson, P. A., 1970, The Fundamental Approximation Theorem of Portfolio Analysis in Terms of Means, Variances and Higher Moments, Review of Economic Studies, 37: 537-42.
Samuelson, P. A., 1972, Mathematics of Speculative Prices, in: R. H. Day and S. M. Robinson, eds., Mathematical Topics in Economic Theory and Computation, Society for Industrial and Applied Mathematics, Philadelphia, 1-42.
Samuelson, P. A., 1974, Proof that Properly Discounted Present Values of Assets Vibrate Randomly, Bell Journal of Economics, 4: 369-74.
Samuelson, P. A. and R. C. Merton, 1974, Generalized Mean-Variance Tradeoffs for Best Perturbation Corrections to Approximate Portfolio Decisions, Journal of Finance, 29: 27-40.
Sharp, K., 1989, Mortgage Rate Insurance Pricing under an Interest Rate Diffusion with Drift, Journal of Risk and Insurance, 56: 34-45.
Sharpe, W. F., 1976, Corporate Pension Funding Policy, Journal of Financial Economics, 3: 183-93.
Smith, C. W., Jr., 1986, On the Convergence of Insurance and Finance Research, Journal of Risk and Insurance, 53: 693-717.
Tapiero, Charles S. and L. Jacque, 1987, The Expected Cost of Ruin and Insurance Premiums in Mutual Insurance, Journal of Risk and Insurance, 54: 594-602.
Wilkie, A. D., 1987, An Option Pricing Approach to Bonus Policy, Journal of the Institute of Actuaries, 114: 21-77; Discussion, 78-90.
Yeh, J. J., 1973, Stochastic Processes and the Wiener Integral, Marcel Dekker, New York.

Senior Research Fellow, IC2 Institute; Joseph H. Blades Professor of Insurance; Professor Departments of Finance, Mathematics, and Management Science and Information Systems; and Research Scientist, Applied Research Laboratories, The University of Texas at Austin. Gus Wortham Chaired Professor of Risk Management and Insurance, Department of Finance, The University of Texas at Austin.

Intuitively, an efficient capital market is the manifestation of a market system that works in a cost-effective manner, and the study of efficient markets is a study of the (stochastic) process of price formation, or equivalently of the return generating stochastic process, and the market's adjustment to a sequence of relevant information subsets. However, the primitive notion from finance that "in equilibrium, price efficiency implies that prices reflect all relevant information" is too general to have any practical quantitative applications for actuarial modeling of insurance products affected by financial prices. To quantitatively formalize and model this intuitive notion of an efficient market, scholars in insurance, actuarial science and finance have developed several approaches to describing the stochastic process of prices. Two of these are the traditional "independent increments" or random walk model familiar to actuaries from risk theory, and the more general "fair game" or martingale model.(1)

Some scholars, such as Cummins (1988), Boyle (1977), Black and Scholes (1973), and Boyle and Schwartz (1977), assume that rates of return, for example on stocks or bonds, follow a Brownian motion process. While there is some empirical support for the implied lognormality of the corresponding prices at any fixed point in time, it would be desirable to complement this with an economically based theoretical argument showing why such continuous time probability models arise as a consequence of basic economic notions. Grossman and Shiller (1982, p. 197) likewise call for a more basic economic rationale for the Brownian motion models which they use.

The Brownian motion and stochastic calculus models referred to above are widely used in insurance and actuarial research; for example, see Emanuel, Harrison and Taylor (1975), Boyle (1977), Martin-Lof (1986), Cummins (1988), and Sharp (1989). The modeling of stochastic interest rates and of equilibrium rates of return for insurers is of particular importance, as illustrated by Boyle and Schwartz (1977), Giaccotto (1986), Braun (1986), Cox, Ingersoll and Ross (1981), and Langetieg (1980).

In this article, a justification for some of these models is supplied by showing how their distributional properties relate to the concepts of arbitrage free equilibrium and continuous time trading. The fact that pricing models based on arbitrage arguments have specific distributional implications is not generally well known or understood, and the objective of this article is to clarify this issue further. A mathematical proof is developed to demonstrate that the Brownian motion assumption is, indeed, a valid approximation to the price distributions implied by market efficiency.

In the next section, a brief history of the random walk hypothesis of asset prices is presented. Then, the traditional random walk mathematical implementation of the efficient markets theory is evaluated: A complete characterization of probability distributions that satisfy this first primitive "independent increments" model of efficiency is presented. It is shown that, with continuous trading and continuous price changes, only a Brownian motion process for returns (or the lognormal distribution for prices) is possible. Next, the modern and more general fair game definition of efficient markets is examined. Some authors view this definition as equivalent to the no arbitrage condition. It is shown that even in this more general setting, for dynamic temporally efficient markets, asset return movements can be approximated by Brownian motion, and hence prices can be approximated by the lognormal distribution. Concluding remarks are then provided.

Background

The study of distributional implications of the random walk hypothesis can be traced back to Bachelier's (1900) seminal development of arithmetic (or absolute) Brownian motion as well as his pioneering work in the application of this stochastic process to financial data. As elucidated in Samuelson (1972), Bachelier predated Einstein in the development of Brownian motion and, in effect, posited the stochastic process presently referred to as the Wiener (or arithmetic Brownian motion) process. Bachelier gives three or four purported proofs that the resulting marginal distributions must be normal. As noted by Samuelson (1972), however, any member of the stable family of distributions also satisfies the properties outlined by Bachelier. Subsequently, numerous authors have examined the use of stable distributions for equilibrium asset pricing. Bachelier's results, however, are obtained under the implicit assumption of finite variance, because the normal distribution is the only member of the stable family with a finite variance.

To circumvent the unrealistic characteristics associated with using a Brownian motion model for the price process itself (e.g., the possibility of negative prices and unlimited liability), Samuelson (1972) proposed replacing the arithmetic Brownian motion process with a geometric Brownian motion process. With this process, a lognormal distribution of prices is obtained. The lognormal distribution has a long history of use in insurance and economics (e.g., Aitchison and Brown (1957), Boyle (1976), and Hogg and Klugman (1984)). As noted in Samuelson (1972), the central limit theorem ensures that, in a wide variety of cases, the probability distribution can be approximated for long holding periods (asymptotically) by a lognormal distribution. Furthermore, as noted in Samuelson and Merton (1974) and Merton and Samuelson (1974), one can, with proper accounting of limiting arguments, still use the mean-variance analysis (in logarithmic return units) which is most familiar to researchers in insurance, finance, and economics. Also, as noted in Samuelson (1970 and 1972), lognormal distributions, and hence mean-variance analysis, appear appropriate in the instantaneous case of continuous trading where the holding period goes to zero.
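The contrast between the two processes can be illustrated with a short simulation. In the sketch below (parameter values are illustrative assumptions), an arithmetic and a geometric path are driven by the same Gaussian shocks: the arithmetic (Bachelier) price routinely crosses zero, while the geometric (Samuelson) price, being the exponential of a Brownian path, cannot.

```python
import math
import random

random.seed(7)

def brownian_increments(n, dt, mu, sigma):
    return [mu * dt + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
            for _ in range(n)]

def arithmetic_path(x0, incs):
    # Bachelier: the price itself follows Brownian motion; it can go negative.
    path = [x0]
    for d in incs:
        path.append(path[-1] + d)
    return path

def geometric_path(x0, incs):
    # Samuelson: the log price follows Brownian motion; the price stays positive.
    path = [x0]
    for d in incs:
        path.append(path[-1] * math.exp(d))
    return path

n, dt = 5000, 1.0 / 250
negatives = 0
for _ in range(200):
    incs = brownian_increments(n, dt, mu=0.0, sigma=1.0)
    arith = arithmetic_path(1.0, incs)
    geom = geometric_path(1.0, incs)
    assert min(geom) > 0.0  # geometric paths never cross zero
    if min(arith) < 0.0:
        negatives += 1
print(negatives)
```

Over a long horizon most of the arithmetic paths dip below zero at some point, while every geometric path remains strictly positive, which is exactly the unlimited-liability objection that motivated Samuelson's substitution.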

As noted in Merton (1969), if the asset returns have a joint multivariate lognormal distribution, then, even though portfolios of lognormal asset returns are not exactly lognormally distributed for any finite holding period, the process of continuous revision leaves the distribution of portfolios lognormal. This property (similar to a closure property of the normal distribution) has been used by Merton (1969, 1970, 1974a, and 1974b), Black and Scholes (1973), and Fischer (1975) in solving portfolio selection problems, market equilibrium analysis, the pricing of options and corporate liabilities, and the pricing of index bonds in a mean-variance framework. These option pricing models and the general diffusion model approaches they introduced have been extensively applied in insurance pricing, pension funding, and immunization theory (e.g., Cummins (1988), Boyle (1978, 1980, 1987), Doherty and Garven (1986), Tapiero and Jacque (1987), Merton (1977), Sharpe (1976), and Wilkie (1987)). The next two sections justify these assumptions for many applications relevant to insurance modeling.

Distributional Implications of the Traditional Independent Increments Definition

In this section it is shown that a primitive implementation of the intuitive notion of an efficient market, namely, that returns form an independent increments process (the so-called random walk hypothesis), has specific distributional implications. Fama (1970) calls this the weak form of the efficient markets hypothesis, which most academics agree has some empirical validity. The implied distributions include all those models used in the literature to derive financial valuation models in dynamic temporally efficient markets, as illustrated by Merton's (1976) jump processes, the binomial model of Cox, Ross and Rubinstein (1979), the familiar compound Poisson model from risk theory which Press (1967) uses for stock prices, the geometric Brownian motion model of Samuelson (1972), and continuous time pricing models in insurance; see Merton (1977), Kraus and Ross (1982), and Cummins (1988).

Like previous researchers, it is assumed throughout this article that the stochastic process under consideration is stationary. Some form of the stationarity assumption is necessary to analyze data over time (such as the stochastic process of prices or claims) because of the need to use past data to obtain information about the future. If nonstationarity of a known type is present (e.g. known trend or cycles) then a transformation can be made to obtain a stationary process which can be subjected to statistical analysis. If, on the other hand, the process is assumed to be nonstationary and the structure of the nonstationarity is also unknown, so that no transformation to a stationary process is possible, then any analysis is futile. Essentially, in this situation, only a single sample of size one is observed at each time with no model for relating the temporal observations. Thus, the stationarity postulate is necessary for the analysis.
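As a concrete illustration of removing nonstationarity of a known form, the sketch below (the drift and volatility values are illustrative assumptions) generates a log price series whose level is nonstationary because of a deterministic drift, and recovers a stationary return series by first differencing.

```python
import random
import statistics

random.seed(0)

# Toy log-price series with a known linear trend: the level is
# nonstationary, but the form of the nonstationarity is known.
drift, sigma = 0.02, 0.05
log_price = [0.0]
for _ in range(2000):
    log_price.append(log_price[-1] + drift + random.gauss(0.0, sigma))

# Differencing removes the known trend component; the resulting return
# series is stationary, with mean near the drift and sd near sigma.
returns = [b - a for a, b in zip(log_price, log_price[1:])]
m_r = statistics.mean(returns)
s_r = statistics.stdev(returns)
print(round(m_r, 3), round(s_r, 3))
```

The differenced series has stable moments over time and so can be subjected to the usual statistical analysis, whereas the raw log price level cannot.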

The independent increments formulation for implementing the notion of a stochastic process in an efficient market is the traditional approach and can be formulated mathematically as follows: Let X(t) denote the insurance price at time t, and consider the returns over non-overlapping intervals of time [t.sub.0] < [t.sub.1] < ... < [t.sub.n]. The motivation for this traditional definition is that the return rate variables over disjoint time intervals, e.g., [Mathematical Expression Omitted] should not contain information about each other, and hence should be independent random variables.(2) This implementation of efficiency of pricing in the insurance market amounts to saying that the process lnX(t) is a stochastic process with independent increments.(3)

The assumption of independent increments is one of the earliest and most common procedures used to implement the efficient markets concept in theoretical modeling and it contains the random walk concept often used in risk theory modelling. This assumption has very specific distributional implications, as shown below.

The following stochastic structure theorem (cf. Gikhman and Skorohod 1969), which holds for Z(t) = lnX(t), shows precisely the relationship among the random walk, the binomial, the Poisson jump, and Brownian motion models for returns.

Theorem 1: (Structure for Processes with Independent Increments)

If Z(t) is any process with independent increments, then Z(t) can be decomposed as Z(t) = [Z.sub.1](t) + [Z.sub.2](t) + [Z.sub.3](t), where [Z.sub.1](t) is a process whose sample paths have discontinuities at fixed times, [Z.sub.2](t) is a Gaussian process with independent increments, and [Z.sub.3](t) is a (possible limit of a) compound Poisson process with jump discontinuities which occur at random times. Moreover, the three processes are independent. The uses for financial modeling of each of these components are discussed below.

The process [Z.sub.1](t) is a sum of independent random variables with the times of the sample path jumps known in advance. Of course, assuming stationarity in distribution implies that the jump sizes are identically distributed random variables as well. This corresponds to the random walk hypothesis for (daily, weekly, or monthly) prices. If the prices are allowed to change only by the amounts [+ or -] 1, with probabilities p and (1 - p) respectively, occurring at times h, 2h, 3h, . . . , then the binomial jump model of Cox, Ross and Rubinstein (1979) is obtained.
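A minimal sketch of this fixed-time binomial jump mechanism follows. The price, step size, and probability are illustrative assumptions; the multiplicative form with d = 1/u is the usual Cox-Ross-Rubinstein recombining lattice, which corresponds to log-price moves of [+ or -] ln u at the fixed times h, 2h, 3h, . . . .

```python
import random

random.seed(1)

def binomial_price_path(x0, u, d, p, steps):
    """Cox-Ross-Rubinstein lattice: at each fixed time step the price
    is multiplied by u with probability p, or by d with probability 1 - p."""
    path = [x0]
    for _ in range(steps):
        factor = u if random.random() < p else d
        path.append(path[-1] * factor)
    return path

# One simulated year of daily binomial moves on a recombining lattice.
path = binomial_price_path(x0=100.0, u=1.01, d=1.0 / 1.01, p=0.5, steps=250)
print(round(path[-1], 2))
```

Because d = 1/u, every simulated price lies on the lattice x0 * u**k for some integer k, and as the step size shrinks this scheme converges to the geometric Brownian motion case discussed below.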

For many insurance models (e.g., risk processes), the assumption of stochastic continuity is perhaps a more realistic model than the fixed time jump process [Z.sub.1](t). Stochastic continuity of prices says that no jumps in the price can be anticipated in advance. This is the same as stating Pr[Z(.) is discontinuous at [t.sub.0]] = 0 for all [t.sub.0], or equivalently that P - lim t [right arrow] [t.sub.0] Z(t) = Z([t.sub.0]). Under this reasonable assumption, the component [Z.sub.1](t) = 0, i.e., there are no fixed points of discontinuity. However, this does not rule out discontinuous sample paths. Indeed, the Poisson process used in claims modelling of insurer operations (cf. Panjer and Willmot 1989) and the stable processes used in insurance loss processes (cf. Paulson and Faris 1985) both have purely discontinuous (i.e., step) sample paths, but the exact time at which a jump will occur is not known in advance of its occurrence. Both of the component processes [Z.sub.2](t) and [Z.sub.3](t) are stochastically continuous. It has been argued (cf. Campbell 1980) that this is most appropriate for many life insurance stochastic processes.

The process [Z.sub.2](t) is a generalized Brownian motion; it has independent increments, and, for fixed t, the distribution of [Z.sub.2](t) is normal with mean [Mu](t) and variance [[Sigma].sup.2](t). Moreover, the function [[Sigma].sup.2](t) is an increasing function of t, and the sample paths of [Z.sub.2](t) are not only continuous in probability but are actually continuous with probability one. If the original process Z is stationary, then [Mu](t) = t[Mu] and [[Sigma].sup.2](t) = t[[Sigma].sup.2] for fixed numerical constants [Mu] and [[Sigma].sup.2], and [Z.sub.2] is ordinary Brownian motion. In the context of modeling the rates of return on investments or the underwriting rate of return, this corresponds to prices X(t) following a geometric Brownian motion process, as postulated by Samuelson (1972). In fact, the assumption of continuous sample paths and independent increments for the returns lnX(t) is necessary and sufficient for the prices X(t) to follow a generalized geometric Brownian motion. If the original implementation of the definition of market efficiency is used (i.e., independent increments), and if prices are assumed to change continuously in time, then it is a conclusion that prices are lognormal; in fact, the entire process is generalized Brownian motion (see Yeh (1973), Chapter 3, for a proof of this result). Note that this same argument applies to the pricing of bonds, whose use in interest rate calculations is so fundamental in insurance.

The final component in the structure theorem, [Z.sub.3](t), is also stochastically continuous, but every sample path is a step function. The times of these jump discontinuities are random, however, and cannot be predicted with certainty prior to observation. It can be shown that the process [Z.sub.3](t) is (perhaps a limit of) a compound Poisson process. To illustrate this, suppose N(t) is a Poisson process with intensity (mean) [Lambda](t) up to time t, and that [X.sub.1], [X.sub.2], . . . are independent and identically distributed; then [Mathematical Expression Omitted] in distribution, where C is a constant. In the case where N(t) denotes the number of (jump) price changes up to time t and [X.sub.i] denotes the size of the [i.sup.th] price change, this becomes an often used formulation for prices which follow a random walk, but in which the number of transactions (trading volume) involved per period is also explicitly acknowledged to be a random variable. These compound Poisson stochastic processes provide the foundation for traditional collective risk theory models and financial ruin theory in actuarial science (cf. Bowers et al. 1986) and have numerous applications in insurance theory (cf. Panjer and Willmot 1989).(4) They have purely discontinuous sample paths. Merton's (1976) jump process model for asset prices is also a special case of the above development, as is the Press (1967) model for stock prices, and most risk theory processes.(5)
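A compound Poisson path of this [Z.sub.3] type is easy to simulate by summing exponential inter-arrival times; the intensity and the exponential claim-size distribution below are illustrative assumptions. The sketch checks the textbook moment identities E[N(t)] = [Lambda]t and E[S(t)] = [Lambda]t E[X] for the jump count N(t) and aggregate jump S(t).

```python
import random

random.seed(3)

def compound_poisson_jumps(lam, t, claim_sampler):
    """Jump times and sizes of a compound Poisson process on [0, t]:
    Poisson arrivals with rate lam, i.i.d. jump sizes from claim_sampler.
    Between jumps the sample path is a step function."""
    jumps = []
    clock = random.expovariate(lam)  # exponential inter-arrival times
    while clock < t:
        jumps.append((clock, claim_sampler()))
        clock += random.expovariate(lam)
    return jumps

lam, t, mean_claim = 5.0, 100.0, 2.0
count, total = 0, 0.0
for _ in range(200):
    jumps = compound_poisson_jumps(lam, t, lambda: random.expovariate(1.0 / mean_claim))
    count += len(jumps)
    total += sum(size for _, size in jumps)

avg_n = count / 200   # should be near lam * t = 500
avg_s = total / 200   # should be near lam * t * mean_claim = 1000
print(round(avg_n, 1), round(avg_s, 1))
```

The jump times are not known in advance of their occurrence, which is exactly the stochastic-continuity property that distinguishes [Z.sub.3] from the fixed-time component [Z.sub.1].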

In conclusion, the traditional formulation of market efficiency using independent increment stochastic processes has definite distributional implications. In particular, the only stationary return process with independent increments that allows for continuous price changes is the Brownian motion process. This return process goes a long way toward providing a justification of the geometric Brownian motion (lognormal) model that has been postulated in many continuous time situations (cf. Black and Scholes (1973), Cummins (1988), Samuelson (1970)). Moreover, it also gives theoretical justification for the empirically observed good fit of the lognormal distribution to security price data (e.g., Cootner (1974), Lintner (1972), and Blattberg and Gonedes (1974)).

A Second Weaker "Fair Game" Definition of Market Efficiency and Its Distributional Implications

The independent increments definition of efficiency is the older and much more restrictive (perhaps even misleading) approach to modeling efficiency. The most succinct and general definition of market efficiency is the lack of predictable arbitrage possibilities. Samuelson's fair game model (1965 and 1974) formalizes this intuitive definition by arguing that appropriately discounted successive prices should form a martingale stochastic process. This is the more modern approach to implementing these intuitive notions. In fact, Ross (1987) and others believe the martingale model formalizes the definition of the no-arbitrage condition.(6) Much of the actuarial risk-model literature is derived using martingale methods (cf. Gerber 1979 and de Vylder 1977).

To be mathematically specific, let X(t) denote the price of an asset at time t. Then, to properly adjust for long term inflationary trends, riskless market returns, underwriting cycles, and other economic factors, all prices are discounted to a common comparison date, say t = 0. Accordingly, Samuelson lets Y(t) = exp{ - [Delta] (t)}X(t) denote the properly discounted price. If seasonal or cyclical effects are present (as in certain insurance lines), they are assumed to be incorporated into the adjusted prices Y(0), Y(1), . . . through the discount factor [Delta] (t). Roughly speaking, Samuelson's argument in support of a martingale formulation for equilibrium prices in an efficient market is as follows. If the return quotient [Mathematical Expression Omitted] were expected to be positive at some future time t + u, given the information available about the prices up to time t, then buyers of insurance would flock to the market at time t to take advantage of the opportunity to insure their risk at a lower price. The demand for the coverage at time t would go up and the prices would rise until the perceived excess return disappeared. Similarly, if the expected value of the return quotient were negative, then market participants would tend to forgo insurance for that length of time while waiting for a lower price. This in turn would drive the price at time t down until the perceived price inequity disappeared.

Formulated mathematically, the above argument says that the equilibrium price in an efficient market should satisfy [Mathematical Expression Omitted], where [Mathematical Expression Omitted] is the logarithm of the return on the asset at time t. This may be stated equivalently in terms of the rate of return process as [Mathematical Expression Omitted]. This is Samuelson's celebrated fair game or martingale property of asset returns; that is, lnY(t) is a martingale.(7) For a more formal development of the martingale consequences of efficiency, see Samuelson (1972 and 1974).
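Written out explicitly, the fair game condition takes the following form (a reconstruction based on Samuelson's argument; here [Mathematical Expression Omitted] is rendered with the information set denoted by a filtration symbol, which is an assumption of notation rather than the authors' original typography):

```latex
% Fair game (martingale) condition on the discounted prices Y(t):
% the expected log return, given current information F_t, is zero,
E\left[\ln \frac{Y(t+u)}{Y(t)} \,\Big|\, \mathcal{F}_t\right] = 0
\quad \text{for all } u > 0,
% or, equivalently, stated for the discounted log price level,
E\left[\ln Y(t+u) \,\Big|\, \mathcal{F}_t\right] = \ln Y(t),
% which is precisely the statement that ln Y(t) is a martingale.
```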

It is now shown that the geometric Brownian motion process model for dynamic prices is a consequence of the fair game (martingale) representation of prices in a temporally efficient market and not just an assumption made for computational purposes. It follows that those who believe in efficient markets (as a fair game model) should also believe that geometric Brownian motion models provide reasonable approximations for efficient markets.

As noted above, formalizing the notion of market efficiency in terms of a fair game model implies that the stochastic process lnY(t), t = 1, 2, . . ., forms a martingale sequence, where Y(t) = exp{-[Delta](t)}X(t) is the properly discounted present value at time t. Here t might refer to days, weeks, months, or any sufficiently non-zero length of time. The requirement that t not be too close to zero in magnitude is made so that the physical trading process, in the case of equity or bond market prices, or competitive pricing reactions, in the case of insurance contracts, have sufficient time to respond to unbalanced expectations and force the martingale property to hold.

In addition to the martingale property of lnY(t), which follows from the fair game formulation of market efficiency, two other specifications on the process [Mathematical Expression Omitted] are imposed. These two specifications are intuitively consistent with the unpredictability notion of a competitive and efficient market. First, assume [Epsilon](t) is stationary. As noted before, this should be the case if there are no anticipated structural changes occurring in the returns for which investors can adjust.(8) It might also be noted that this assumption is very common in the finance, economics, and insurance literature, because it is natural to attempt to use past data to fit the unknown parameters in a stochastic model. The stationarity assumption makes it possible to obtain the necessary parameter estimates.

The final assumption required is that [Epsilon](t) be an ergodic process.(9) Essentially this means that time averages and state space averages coincide. The assumption of ergodicity has two desirable properties. First, since one can only observe a single sample path of the stochastic price process in prior and current economic states of nature, ergodicity allows us to replace space averages (averages over possible states of nature at a future point in time) by time averages (averages of a single state of nature over past points in time). Second, ergodicity implies that the limiting equilibrium distribution is independent of the starting point of the stochastic process. If the limiting distribution is viewed as characterizing stochastic equilibrium, the ergodic property extends the notion of deterministic equilibrium, which is independent of the starting point, to stochastic equilibrium.
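The interchange of time averages and state space averages can be illustrated with a small simulation (a sketch using a hypothetical stationary AR(1) return process; the coefficient 0.5 and unit noise scale are arbitrary). The time average along one long sample path and the ensemble average over many independent realizations at one fixed date both converge to the same stationary mean:

```python
import numpy as np

rng = np.random.default_rng(2)

def ar1(phi, sigma, n, n_paths=1):
    """Sample paths of the stationary, ergodic AR(1) process
    x[t] = phi * x[t-1] + sigma * e[t],  |phi| < 1, started at 0."""
    e = rng.normal(size=(n_paths, n))
    x = np.zeros((n_paths, n))
    for t in range(1, n):
        x[:, t] = phi * x[:, t - 1] + sigma * e[:, t]
    return x

# Time average along ONE long realization (one state of nature through time)
time_avg = ar1(0.5, 1.0, 200_000)[0, 1000:].mean()

# Ensemble (state space) average over many realizations at ONE fixed time
space_avg = ar1(0.5, 1.0, 500, n_paths=2000)[:, -1].mean()

print(time_avg, space_avg)  # both close to the stationary mean, 0
```

For a non-ergodic process (for instance, one whose mean depends on a randomly drawn starting regime), the two averages would not agree, which is exactly why the ergodicity assumption is needed to justify inference from a single observed history.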

Ergodicity is tacitly assumed by most authors doing empirical research involving equity and bond prices because they estimate, say, the average price in some future potential state of the economy by using the average of past prices (which reflect just one state of the world at each point observed over time). Obviously only a single realization of the sample path [Epsilon](t, [[Omega].sub.1]) can be observed (namely, that corresponding to the state of nature or state of the economy [[Omega].sub.1] at each point in time). However, any inference concerning expected values at a future time t + u involves averaging over the ensemble of possible economic states of nature {[[Omega].sub.1]}. Thus, this inference must be based on averages of the single sample path which was observed over time. The assumption that the past values of the single sample path [Epsilon](t, [[Omega].sub.1]) over time give information about expected values of [Epsilon](t + u) over [Omega] requires ergodicity.

Although some authors (e.g., Lucas 1978) assume that asset prices form a stationary Markov process, it can be shown that a stationary Markov chain with positive transition probabilities is ergodic, so this assumption is a common one in theoretical developments as well. However, it should be noted that the stationary ergodic assumption made in this article is weaker, or more general, than those made by many other scholars conducting empirical and theoretical work. The precise form of the stationary ergodic process governing the discounted prices (e.g., the ARMA process as used by Panjer and Bellhouse (1980) and Bellhouse and Panjer (1981), or the Ito process used by Cummins (1988) and others) is not specified in advance.

It is interesting to observe that just the assumptions of stationarity and ergodicity by themselves firmly embody the intuitive interpretation that sequences of past prices cannot exhibit statistically exploitable patterns. The following theorem from the engineering literature on statistical communication can be used to show that in the stationary ergodic world assumed by most authors, the process of technical analysis is essentially assumed to be futile.

Theorem 2: (McMillan's Equipartition Theorem)

Let [V.sub.1], [V.sub.2], . . . be a stationary ergodic process with a finite number K of possible values for each [V.sub.i]. Then there is a number H (called the entropy of the process) such that for any given number [Epsilon] > 0 and for n sufficiently large, the set of all [K.sup.n] possible sequences of observed values ([v.sub.1], [v.sub.2], . . ., [v.sub.n]) is decomposable into the disjoint union of two sets [A.sub.n] and [B.sub.n]. These two sets satisfy the following:

P([B.sub.n]) < [Epsilon],

and for each ([v.sub.1], [v.sub.2], . . ., [v.sub.n]) in [A.sub.n],

exp{-n(H + [Epsilon])} < P([V.sub.1] = [v.sub.1], . . ., [V.sub.n] = [v.sub.n]) < exp{-n(H - [Epsilon])}.

Intuitively, McMillan's Equipartition Theorem indicates that, except for a set of possible sequences of length n which has only a very small chance of occurring (probability less than [Epsilon]), all possible sequences of length n are essentially equally likely to be observed; that is, the probability of each sequence is about exp{-n H}. In the context of prices, this says that, with very high probability, a particular price or return sequence of length n which nature has presented and which has actually been observed is essentially interchangeable, in terms of likelihood, with almost any other length n sequence. Of course, the other length n sequences not observed could contain quite different patterns than the pattern displayed by the observed data. The unpredictability of asset prices and the futility of technical analysis (pattern recognition techniques) in a stationary ergodic asset market are thus drawn as a conclusion from the stationarity and ergodicity assumptions which underlie most of the literature involving an efficient market. A similar conclusion applies to insurance pricing models whose foundation rests on these two postulates.
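The equipartition property can be checked numerically in the simplest stationary ergodic setting, an i.i.d. binary sequence (an illustrative sketch with an arbitrary success probability; entropy H is measured in nats here). Almost every observed sequence has probability close to exp{-nH}:

```python
import numpy as np

rng = np.random.default_rng(3)

p = 0.3                                       # P(V = 1); K = 2 possible values
H = -(p * np.log(p) + (1 - p) * np.log(1 - p))  # entropy per symbol (nats)
n = 2000

# Draw many length-n sequences and compute each one's exact log-probability
seqs = rng.random((5000, n)) < p
ones = seqs.sum(axis=1)
log_prob = ones * np.log(p) + (n - ones) * np.log(1 - p)

# Equipartition: for typical sequences, -log P(v_1, ..., v_n) / n is near H,
# i.e., each observed sequence has probability roughly exp{-n H}
per_symbol = -log_prob / n
frac_typical = np.mean(np.abs(per_symbol - H) < 0.05)
print(H, per_symbol.mean(), frac_typical)
```

Virtually every simulated sequence falls in the "typical" set [A.sub.n], so any one observed history is, in likelihood terms, interchangeable with almost any other, which is the sense in which pattern hunting is futile.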

Until now, nothing has been said about the nature of the statistical distribution of the rate of return process. By including the economic assumption that there is no arbitrage opportunity in the pricing of contracts over time (as implemented by Samuelson's martingale assumption) the following very strong approximation theorem for price distributions can be obtained.

Theorem 3:

Let X(t) denote the price and Y(t) = exp{-[Delta](t)}X(t) the properly discounted present value of X(t). If the market is efficient in the sense that the return process [Mathematical Expression Omitted] is a square integrable, stationary, ergodic process and lnY(t) has the martingale property, then X(t) is approximately lognormal. Moreover, the entire price process [Mathematical Expression Omitted] is simultaneously approximated by a geometric Gaussian process over the entire time interval.

Proof:

The proof follows from a theorem of Billingsley and Ibragimov on stationary ergodic processes with martingale differences (see Billingsley (1968), Theorem 23.1). Their theorem states the following:

Let [Mathematical Expression Omitted] be a stationary ergodic process for which [Mathematical Expression Omitted], and let [Mathematical Expression Omitted]. Define the Donsker-like partial sums by simply linearly connecting the points [S.sub.1], [S.sub.2], . . . Mathematically, this connected curve may be written as [S.sub.[nt]]([Omega]) = [Mathematical Expression Omitted]. Then the stochastic process [Mathematical Expression Omitted] converges weakly as a stochastic process to Brownian motion. That is, all the finite-dimensional distributions of [Mathematical Expression Omitted] converge to the corresponding finite-dimensional distributions of Brownian motion.
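For concreteness, the omitted expressions in the theorem can be written out as follows (a reconstruction of the standard statement of Billingsley (1968), Theorem 23.1, in notation chosen to match this article; the filtration and floor-function symbols are notational assumptions):

```latex
% Stationary ergodic martingale differences with finite variance:
E[\epsilon_k \mid \epsilon_{k-1}, \epsilon_{k-2}, \ldots] = 0,
\qquad E[\epsilon_k^2] = \sigma^2 < \infty .
% Partial sums, linearly interpolated between the integer points:
S_k = \epsilon_1 + \cdots + \epsilon_k, \qquad
S_{[nt]}(\omega) = S_{\lfloor nt \rfloor}
  + (nt - \lfloor nt \rfloor)\,\epsilon_{\lfloor nt \rfloor + 1} .
% Weak convergence to standard Brownian motion W on [0, 1]:
\frac{S_{[nt]}}{\sigma \sqrt{n}} \;\Longrightarrow\; W(t),
\qquad 0 \le t \le 1 .
```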

Returning to the determination of the price process in an arbitrage-free and efficient-market framework, [Mathematical Expression Omitted] in the Billingsley-Ibragimov theorem can be used. The arbitrage-free, efficient-market hypothesis implies lnY(n) is a martingale, and hence [Mathematical Expression Omitted]. By the multiplicative property of the return process, [S.sub.k] can be calculated as follows:

[Mathematical Expression Omitted]. It then follows from the above quoted theorem of Billingsley and Ibragimov that [S.sub.[nt]] converges to Brownian motion.

An examination of the stochastic process [S.sub.[nt]] in the above framework should be helpful. For ease of illustration, take Y(0) = 1. The sum [S.sub.[nt]] is like a dependent random walk: [S.sub.[nt]] takes on the values [S.sub.1], [S.sub.2], [S.sub.3], . . . at the points [Mathematical Expression Omitted]... and is linearly connected in between these points, as shown in Figure 1.

An appropriate dependent martingale central limit theorem might assert that (when appropriately normalized) the partial sum [S.sub.n] = lnY(n) would be approximately normal (and hence justify the discrete lognormal models used, for example, by Doherty and Garven (1986)),(10) but the Billingsley-Ibragimov theorem says much more! The convergence result in Theorem 3 asserts that the entire path of the dependent random walk lnY(n) = [S.sub.n] during the first n steps is distributed approximately as the entire path up to t = 1 for a particle under Brownian motion. This contains information going far beyond the central limit theorem. Qualitatively, [Mathematical Expression Omitted] converging to Brownian motion says that if [Tau] is small, then a return process subject to displacements of size [Epsilon](1), [Epsilon](2), . . ., at successive times [Tau], 2[Tau], . . ., will, when viewed from afar, appear to perform approximately a Brownian motion. Since [Mathematical Expression Omitted] and [Mathematical Expression Omitted] in the case of insurance contract or bond prices considered in this article, the theorem indicates that lnY(k) is approximately a Brownian motion, or, upon exponentiation, the discounted price process Y(k) is approximately a geometric Brownian motion. Translating back to the original price process, X(t) is approximately geometric Brownian motion with mean function [Delta](t) and variance [[Sigma].sup.2]t. This result theoretically substantiates the empirical evidence cited previously which suggested that geometric Brownian motion gives good approximations to the sample paths of discounted prices. Moreover, it supplies the development desired by Grossman and Shiller (1982, p. 197) to justify (at least asymptotically) on economic grounds the commonly made mathematical assumption of geometric Brownian motion (Ito process).
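A small simulation illustrates the force of the result (an illustrative sketch; the ARCH(1)-type increments below are a hypothetical stationary ergodic martingale-difference sequence, chosen because its terms are uncorrelated but not independent, so the classical i.i.d. random walk assumption fails while Theorem 3 still applies). The normalized partial sums nevertheless behave like Brownian motion at t = 1:

```python
import numpy as np

rng = np.random.default_rng(4)

def arch1_increments(w, a, n, n_paths):
    """Stationary ergodic martingale-difference increments (ARCH(1) type):
    e[t] = eta[t] * sqrt(w + a * e[t-1]^2),  eta i.i.d. N(0, 1),  0 < a < 1.

    The e[t] are uncorrelated but dependent through their conditional
    variance, so they are NOT independent increments."""
    eta = rng.normal(size=(n_paths, n))
    e = np.zeros((n_paths, n))
    for t in range(1, n):
        e[:, t] = eta[:, t] * np.sqrt(w + a * e[:, t - 1] ** 2)
    return e

n, n_paths, w, a = 1000, 3000, 0.5, 0.3
sigma2 = w / (1 - a)            # stationary variance of each increment

# Terminal value of the normalized partial sum S_n / sqrt(n * sigma^2):
# approximately standard normal, matching Brownian motion at t = 1
z = arch1_increments(w, a, n, n_paths).sum(axis=1) / np.sqrt(n * sigma2)
print(z.mean(), z.var())        # near 0 and 1
```

The same simulation, kept as a full path rather than a terminal value, traces out an approximate Brownian motion, which is the path-level content of the theorem that a one-point central limit theorem cannot deliver.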

Concluding Remarks

In this article, it has been pointed out that both the original independent increments (random walk) model and the martingale (fair game) model for quantifying the intuitive notion of arbitrage-free equilibrium pricing, as used in insurance pricing, actuarial science, and financial modeling, have specific implications for the relevant statistical distributions of the underlying price processes. It is shown that the models used by actuaries and researchers in insurance involving Brownian motion and Ito processes are not ad hoc models utilized for mathematical convenience, but rather can be obtained (at least to a level of approximation) as a conclusion of the efficient markets hypothesis from finance. These results also demonstrate the relevance of the lognormal distribution for discrete time models and show that theoretical results support, in a world of approximations, the use of lognormal or geometric Brownian motion processes in insurance pricing models involving financial and economic factors. [Figure Omitted]

(1)The martingale model, according to some authors (e.g., Ross 1987), encompasses the concept of no arbitrage. At the very least, it can be shown that when no arbitrage is possible, there exists an equivalent probability measure which gives the prices and for which the process of returns is a martingale. (See Ross (1989, p. 3) for a discussion and literature.)

(2)This is a more restrictive formulation than Samuelson's "fair game" formulation of market efficiency, which is investigated in the next section.

(3)It should be noted that the geometric nature of the way in which returns are compounded implies a multiplicative stochastic structure, since arbitrage does not operate on prices themselves, but instead works to eliminate excess returns. Consequently, it is usually the stochastic process of rates of return lnX(t) that has independent increments in this traditional formulation of the efficient markets hypothesis.

(4)Due to a general central limit theorem (cf. Breiman (1968), Chapter 9.8), the stable processes are also of this third component type and have been put forth as equity asset return models.

(5)Brownian motion models are also used in collective risk theory for economically related processes. The approximation of compound Poisson processes by Brownian motion processes can be proven, and the economic rationale for the models can be developed from the results of the current paper.

(6)Other, even more general, mathematical definitions of market efficiency with respect to information available in the market place are possible. Latham (1986) reviews the newer attempts to mathematically formalize the notions implicit in the efficient markets hypothesis and develops a one period (i.e., discrete time) definition of efficiency which satisfies a "subset property".
(7)Lucas (1978) has shown that in an asset market in which prices follow a stationary Markov process with strictly positive transition probabilities, the rates of return determined endogenously need not follow a martingale process, but rather there is some other discounted process [w.sub.t] which will instead form a martingale process. Since stationary Markov chains in which all states communicate are ergodic, the development given in the sequel can be translated to the [w.sub.t] process if one is working within the Lucas framework. Samuelson's formulation is used for ease of exposition and simplicity in interpretation of the results.

(8)This assumption does not imply, however, that the original stochastic process X(t) for prices is stationary, since cyclic variation, for example, may be present in X(t) and removed from the rate of return quotient process [Epsilon](t) by the market participants by appropriately selecting the discounting function [Delta](t) in Y(t).

(9)For readers unfamiliar with the mathematical concept of ergodicity, it can be illustrated in the context of a game of pool. If a player hits a ball so that it strikes one of the table's edges at 90 [degrees], the ball will rebound and follow the same path back. If there were no friction and no pockets, then the ball would continue to go back and forth following the same trajectory. This is an example of periodicity, where the ball passes over only certain points on the pool table. Ergodicity, on the other hand, would be observed if the ball were struck so that it came off the wall at an irrational angle other than 90 [degrees]. Instead of following the same path back, it would follow a different path every time it rebounded off one of the walls. If there were no friction and no pockets, then the ball would eventually pass over every point on the table. Thus, the ball's movement with respect to the pool table would be characterized as an ergodic process.
(10)The results here pertain to price distributions when there is economic efficiency and no arbitrage; however, if one assumes that price distributions are mere reflections of loss distributions, then under certain strong assumptions, an approximation of loss distributions by lognormal models may be justified in certain circumstances. This approximate lognormality has been found for some loss distributions and not others, so the approximation for loss distributions is by no means universal. Economic efficiency works primarily on the prices and not the losses in these cases.

References

Aitchison, J. and Brown, J. A. C., 1957, The Lognormal Distribution, Cambridge: Cambridge University Press.

Bachelier, L., 1900, Theorie de la Speculation, Annales de l'Ecole Normale Superieure, 17: 21-86. Translated in: Cootner, P. H., ed., 1964, The Random Character of Stock Market Prices, Cambridge, Massachusetts: MIT Press, 17-75.

Bellhouse, D. R. and H. H. Panjer, 1981, Stochastic Modelling of Interest Rates with Applications to Life Contingencies - Part II, Journal of Risk and Insurance, 48: 628-37.

Black, F. and M. Scholes, 1973, The Pricing of Options and Corporate Liabilities, Journal of Political Economy, 81: 637-54.

Billingsley, P., 1968, Convergence of Probability Measures, New York: Wiley.

Blattberg, R. C. and N. J. Gonedes, 1974, A Comparison of Stable-Paretian and Student Distributions as Statistical Models for Stock Prices, Journal of Business, 47: 244-80.

Bowers, N. L., H. Gerber, J. Hickman, D. Jones, and C. Nesbitt, 1986, Actuarial Mathematics, Itasca, Illinois: Society of Actuaries.

Boyle, Phelim P., 1977, Financial Instruments for Retired Homeowners, Journal of Risk and Insurance, 44: 513-20.

Boyle, Phelim P., 1978, Immunization under Stochastic Models of the Term Structure, Journal of the Institute of Actuaries, 105: 177-87.

Boyle, Phelim P. and Edward S. Schwartz, 1977, Equilibrium Prices of Guarantees Under Equity-Linked Contracts, Journal of Risk and Insurance, 44: 639-60.

Boyle, Phelim P., 1976, Rates of Return as Random Variables, Journal of Risk and Insurance, 43: 693-713.

Boyle, Phelim P., 1980, Recent Models of the Term Structure of Interest Rates with Actuarial Applications, Transactions of the 21st International Congress of Actuaries, Topic 4, 95-104.

Boyle, Phelim P., 1987, Perspective on Mortgage Default Insurance, in: I. B. MacNeill and G. J. Umphrey, eds., Actuarial Science, Advances in the Statistical Sciences, 6: 185-99.

Braun, H., 1986, Weak Convergence of Asset Processes with Stochastic Interest Returns, Scandinavian Actuarial Journal, 98-106.

Breiman, L., 1968, Probability, Reading, Massachusetts: Addison-Wesley.

Buhlmann, Hans, 1987, Actuaries of the Third Kind? ASTIN Bulletin, 17: 137-38.

Campbell, R. A., 1980, The Demand for Life Insurance: An Application of the Economics of Uncertainty, Journal of Finance, 35: 1155-172.

Cootner, P. H., 1974, The Random Character of Stock Market Prices, Cambridge, Massachusetts: MIT Press.

Cox, J. C., S. A. Ross, and M. Rubinstein, 1979, Option Pricing: A Simplified Approach, Journal of Financial Economics, 7: 229-63.

Cox, J. C., J. E. Ingersoll, Jr., and S. A. Ross, 1981, A Reexamination of Traditional Hypotheses about the Term Structure of Interest Rates, Journal of Finance, 36: 769-99.

Cummins, J. David, 1988, Risk Based Premiums for Insurance Guaranty Funds, Journal of Finance, 43: 823-39.

De Vylder, F., 1977, Martingales and Ruin in a Dynamical Risk Process, Scandinavian Actuarial Journal, 217-225.

Doherty, Neil and James R. Garven, 1986, Price Regulation in Property-Liability Insurance: A Contingent-Claims Approach, Journal of Finance, 41: 1031-1050.

Emanuel, D. C., J. M. Harrison, and A. J. Taylor, 1975, A Diffusion Approximation for the Ruin Function of a Risk Process with Compounding Assets, Scandinavian Actuarial Journal, 240-47.

Fama, E. F., 1970, Efficient Capital Markets: A Review of Theory and Empirical Work, Journal of Finance, 25: 383-417.

Fischer, S., 1975, The Demand for Index Bonds, Journal of Political Economy, 83: 509-34.

Gerber, H., 1979, An Introduction to Mathematical Risk Theory, Huebner Foundation Monograph No. 8, Wharton School, University of Pennsylvania, Homewood, Illinois: Richard D. Irwin.

Giaccotto, Carmelo, 1986, Stochastic Modeling of Interest Rates: Actuarial Vs. Equilibrium Approach, Journal of Risk and Insurance, 3: 435-53.

Gikhman, I. and A. V. Skorohod, 1969, Introduction to the Theory of Random Processes, Philadelphia: W. B. Saunders.

Grossman, S. and R. Shiller, 1982, Consumption Correlatedness and Risk Measurement in Economies with Non-Traded Assets and Heterogeneous Information, Journal of Financial Economics, 10: 195-210.

Hogg, R. V. and S. A. Klugman, 1984, Loss Distributions, New York: John Wiley.

Ingersoll, J., 1976, A Theoretical and Empirical Investigation of Dual Purpose Funds: An Application of Contingent Claims Analysis, Journal of Financial Economics, 3: 83-123.

Kraus, A. and S. A. Ross, 1982, The Determination of Fair Profits for the Property-Liability Insurance Firm, Journal of Finance, 37: 1015-028.

Langetieg, T. C., 1980, A Multivariate Model of the Term Structure, Journal of Finance, 35: 71-97.

Latham, Mark, 1986, Informational Efficiency and Information Subsets, Journal of Finance, 51: 39-52.

Lintner, J., 1965, Security Prices, Risk and Maximal Gain From Diversification, Journal of Finance, 30: 587-615.

Lintner, J., 1972, Equilibrium in a Random Walk and Lognormal Securities Market, Harvard Institute of Economic Research Discussion Paper No. 235, Harvard University, Cambridge, Massachusetts.

Lucas, R. E., 1978, Asset Prices in an Exchange Economy, Econometrica, 46: 1429-445.

Martin-Lof, A., 1986, A Stochastic Theory of Life Insurance, Scandinavian Actuarial Journal, 65-81.

Merton, R. C., 1969, Lifetime Portfolio Selection under Uncertainty: The Continuous Time Case, Review of Economics and Statistics, 41: 247-59.

Merton, R. C., 1970, Optimum Consumption and Portfolio Rules in a Continuous Time Model, Journal of Economic Theory, 3: 373-413.

Merton, R. C., 1974a, Theory of Rational Option Pricing, Bell Journal of Economics, 4: 141-183.

Merton, R. C., 1974b, An Intertemporal Capital Asset Pricing Model, Econometrica, 41: 867-888.

Merton, R. C., 1976, Option Pricing When Underlying Stock Returns are Discontinuous, Journal of Financial Economics, 3: 125-144.

Merton, R. C., 1977, An Analytic Derivation of the Cost of Deposit Insurance and Loan Guarantees: An Application of Modern Option Pricing Theory, Journal of Banking and Finance, 1: 3-11.

Merton, R. C. and P. A. Samuelson, 1974, Fallacy of Lognormal Approximation of Optimal Decision Making Over Many Periods, Journal of Financial Economics, 1: 67-94.

Panjer, H. H. and G. E. Willmot, 1989, Insurance Risk Models, Lecture Notes, Act Sci 431, University of Waterloo, Waterloo, Ontario.

Panjer, H. H. and D. R. Bellhouse, 1980, Stochastic Modelling of Interest Rates with Applications to Life Contingencies, Journal of Risk and Insurance, 47: 91-110.

Paulson, A. S. and N. J. Faris, 1985, A Practical Approach to Measuring the Distribution of Total Annual Claims, in: J. D. Cummins, ed., Strategic Planning and Modeling in Property-Liability Insurance, Norwell, Massachusetts: Kluwer Academic Publishers.

Ross, S. A., 1987, Arbitrage and Martingales with Taxation, Journal of Political Economy, 95: 371-393.

Ross, S. A., 1989, Information and Volatility: The No-Arbitrage Martingale Approach to Timing and Resolution Irrelevancy, Journal of Finance, 44: 1-18.

Samuelson, P. A., 1965, Proof that Properly Anticipated Prices Fluctuate Randomly, Industrial Management Review, 6: 41-49.

Samuelson, P. A., 1970, The Fundamental Approximation Theorem of Portfolio Analysis in Terms of Means, Variances and Higher Moments, Review of Economic Studies, 37: 537-42.

Samuelson, P. A., 1972, Mathematics of Speculative Prices, in: R. H. Day and S. M. Robinson, eds., Mathematical Topics in Economic Theory and Computation, Philadelphia: Society for Industrial and Applied Mathematics, 1-42.

Samuelson, P. A., 1974, Proof that Properly Discounted Present Values of Assets Vibrate Randomly, Bell Journal of Economics, 4: 369-74.

Samuelson, P. A. and R. C. Merton, 1974, Generalized Mean-Variance Tradeoffs for Best Perturbation Corrections to Approximate Portfolio Decisions, Journal of Finance, 29: 27-40.

Sharp, K., 1989, Mortgage Rate Insurance Pricing under an Interest Rate Diffusion with Drift, Journal of Risk and Insurance, 56: 34-45.

Sharpe, W. F., 1976, Corporate Pension Funding Policy, Journal of Financial Economics, 3: 183-193.

Smith, C. W., Jr., 1986, On the Convergence of Insurance and Finance Research, Journal of Risk and Insurance, 53: 693-717.

Tapiero, Charles S. and L. Jacque, 1987, The Expected Cost of Ruin and Insurance Premiums in Mutual Insurance, Journal of Risk and Insurance, 54: 594-602.

Wilkie, A. D., 1987, An Option Pricing Approach to Bonus Policy, Journal of the Institute of Actuaries, 114: 21-77; Discussion, 78-90.

Yeh, J. J., 1973, Stochastic Processes and the Wiener Integral, New York: Marcel Dekker.

Senior Research Fellow, IC2 Institute; Joseph H. Blades Professor of Insurance; Professor Departments of Finance, Mathematics, and Management Science and Information Systems; and Research Scientist, Applied Research Laboratories, The University of Texas at Austin. Gus Wortham Chaired Professor of Risk Management and Insurance, Department of Finance, The University of Texas at Austin.

Author: Brockett, Patrick L.; Witt, Robert C.

Publication: Journal of Risk and Insurance

Date: Mar 1, 1991