
Automatic Sampling with the Ratio-of-Uniforms Method


There is a large body of literature on generation methods for standard continuous distributions; see, for example, Devroye [1986]. These algorithms are often specially designed for a particular distribution and tailored to the features of each density. However, in many situations the application of standard distributions is not adequate for a Monte Carlo simulation. Besides sheer brute-force inversion (that is, tabulating the distribution function at many points), several universal methods for large classes of distributions have been developed to avoid the design of special algorithms for these cases. Some of these methods are either very slow (e.g., Devroye [1984]) or need a slow setup step and large tables (e.g., Ahrens and Kohrt [1981], Marsaglia and Tsang [1984], and Devroye [1986]).

Recently two more efficient methods have been proposed. Transformed density rejection by Gilks and Wild [1992] and Hormann [1995] is an acceptance/rejection technique that uses the concavity of the transformed density to generate a hat function automatically. The user only needs to provide the probability density function and perhaps the (approximate) location of the mode. A table method by Ahrens [1993] is also an acceptance/rejection method, but uses a piecewise constant hat. A region of immediate acceptance makes the algorithm fast when a large number of constant pieces is used. The tail region of the distribution is treated separately. In Ahrens [1995] the algorithm is modified to use a piecewise constant hat such that the area below each piece is the same. Thus generation is simplified, but the algorithm requires more setup adjustments for each distribution.

The ratio-of-uniforms method introduced by Kinderman and Monahan [1977] is another flexible method that can be adjusted to a large variety of distributions. It has become a popular transformation method to generate nonuniform random variates, since it results in exact, efficient, fast, and easy-to-implement algorithms. Typically these algorithms have only a few lines of code (e.g., Barabesi [1993] gives a survey and examples of Fortran codes for several standard distributions). It is based on the following theorem.

THEOREM 1 [KINDERMAN AND MONAHAN 1977]. Let X be a random variable with density function f(x) = g(x)/∫g(x)dx, where g(x) is a positive integrable function with support (x_0, x_1), not necessarily finite. If (V, U) is uniformly distributed in

(1) A = A_g = {(v,u) : 0 < u ≤ √g(v/u), x_0 < v/u < x_1},

then X = V/U has probability density function f(x).

For sampling random points uniformly distributed in A_g, rejection from a convenient enveloping region R_g is used. The basic form of the ratio-of-uniforms method is given by Algorithm rou.

Algorithm rou.

Require: function g(x) (prop. to density f(x)); enveloping region R.
1: repeat
2:   Generate random point (V, U) uniformly distributed in R.
3:   X ← V/U.
4: until U² ≤ g(X).
5: return X.

Usually the input for rou is prepared by the designer of the algorithm for each particular distribution. To reduce the number of evaluations of the density function in step 4, squeezes are used. It is obvious that the performance of this simple algorithm depends on the rejection constant, i.e., on the ratio |R|/|A|, where |R| denotes the area of region R. Kinderman and Monahan [1977] and others use rejection from the minimal bounding rectangle, i.e., the smallest possible rectangle {(v,u) : 0 ≤ u ≤ u⁺, v⁻ ≤ v ≤ v⁺}. This basic algorithm has been improved in several ways.(1) A tighter-fitting enclosing region decreases the rejection constant; possible choices are parallelograms (e.g., Cheng and Feast [1979]) or quadratic bounding curves (e.g., Leva [1992]). Often it is convenient to decompose A into a countable set of nonoverlapping subregions ("composite ratio-of-uniforms method"; Robertson and Walls [1980] give a simple example). Dagpunar [1988] considers the possibility of an enclosing polygon.

In this article we develop a new algorithm that uses polygonal envelopes and squeezes. Random variates inside the squeeze are generated by mere inversion, and therefore, in contrast to any other ratio-of-uniforms method, the expected number of uniform random numbers is less than two. For a large class of distributions, including all log-concave distributions, it is possible to construct envelope and squeeze automatically. Moreover, we show that the new algorithm is in some sense equivalent to transformed density rejection.

The new method has several advantages:

--Envelopes and squeezes are constructed automatically. Only the probability density function is necessary.

--The expected number of uniform random numbers is 1 + ρ, where ρ > 0 can be made arbitrarily small.

--For small ρ the method is close to inversion, and thus the resulting random variates can be used for variance reduction techniques. Moreover, the structure of the resulting random variates is similar to that of the underlying uniform random-number generator. Hence the nonuniform random variates inherit its quality properties.

--It avoids some possible defects in the quality of the resulting pseudorandom variates that have been reported for the ratio-of-uniforms method [Hormann 1994a; 1994b].

--It is the first ratio-of-uniforms method and the first implementation of transformed density rejection where the expected number of uniform random numbers is less than two.

In Section 2 we give an outline of this new approach. Section 3 shows that the algorithm is applicable to all T-concave densities, with T(x) = −1/√x. In Section 4 we discuss the problem of getting a proper envelope for the region A_g, Section 5 describes the algorithm in detail, and Section 6 reports our computational experience with the new algorithm and compares it with other algorithms. Remarks on the quality of random numbers generated with the new algorithm are given in Section 7.


2.1 Enveloping Polygons

We are given a distribution with probability density function f(x) = g(x)/∫g(x)dx with convex region A_g. Notice that g must be continuous and bounded, since otherwise A_g would not be convex. To simplify the development of our method we first assume unbounded support for g. (This assumption will be dropped in Section 2.4.)

For such a distribution it is easy to construct an enveloping polygon: select points c_i, i = 0, ..., n, on the boundary of A_g and use the tangents at these points as edges of the enclosing polygon P^e (see Figure 1). We denote the vertices of P^e by m_i; these are simply the intersection points of consecutive tangents. Obviously, our choice of the construction points has to result in a bounded polygon P^e. The procedure even works if the tangent is not unique at a point (v,u), i.e., if g(x) is not differentiable at x = v/u. Furthermore it is very simple to construct squeezes: take the inside of the polygon P^s with vertices c_i.


2.2 Sampling from the Enveloping Polygon

Notice that the origin (0,0) is always contained in the polygon P^e. Moreover, every straight line through the origin corresponds to an x = v/u, and thus its intersection with A is always connected. Therefore we use c_0 = (0,0) as the first construction point and the v-axis as its tangent. To sample uniformly from the enclosing polygon we triangulate P^e and P^s into segments S_i, i = 0, ..., n, with common vertex c_0. Figure 2 illustrates the situation. Segment S_i has the vertices c_0, c_i, m_i, and c_{i+1}, where c_{n+1} = c_0 for the last segment. Each segment is divided into a triangle T_i^s inside the squeeze (dark shaded) and a triangle T_i^o outside (light shaded). Notice that the segments S_0 and S_n have only three vertices and no triangles T_0^s and T_n^s.


To generate a random point uniformly distributed in P^e, we first sample from the discrete distribution with probability vector proportional to (|S_0|, |S_1|, ..., |S_n|) to select a segment, and then select a triangle T_k^s (inside the squeeze) or T_k^o (outside). This can be done by inversion:

Algorithm get_segment.

Require: list of segments.
1: Generate R ~ U(0, 1).
2: Find the smallest k such that A_k = Σ_{i≤k} |S_i| ≥ |P^e| R.
3: if A_k − |T_k^o| ≥ |P^e| R then
4:   return triangle T_k^s. /* inside squeeze */
5: else
6:   return triangle T_k^o. /* outside squeeze */
For step 2 indexed search (or guide tables) is an appropriate method [Chen and Asau 1974]; see also Devroye [1986, Section III.2.4].

Uniformly distributed points in a triangle (v_0, v_1, v_2) can be generated by the following simple algorithm [Devroye 1986, p. 570]:

Algorithm triangle.

Require: triangle (v_0, v_1, v_2).
1: Generate R_1, R_2 ~ U(0, 1).
2: if R_1 < R_2 then swap R_1 and R_2.
3: return (1 − R_1)v_0 + (R_1 − R_2)v_1 + R_2 v_2.

For sampling from T_i^s this algorithm can be much improved. Every point in such a triangle can immediately be accepted without evaluating the probability density function, and thus we are only interested in the ratio of its components. Since the triangle T_i^s has the vertices c_0 = (0, 0), c_i, and c_{i+1}, we arrive at

(2) X = V/U = ((1 − R) c_{i,1} + R c_{i+1,1}) / ((1 − R) c_{i,2} + R c_{i+1,2}),

where c_{i,j} is the jth component of vertex c_i, and R = R_2/R_1 again is a (0, 1)-uniform random variate by the ratio-of-uniforms theorem, since 0 ≤ R_2 ≤ R_1 ≤ 1 [Kinderman and Monahan 1977]. Notice that we save one uniform random number in the domain P^s by this method. Furthermore, we can reuse the random number R from routine get_segment by R̃ = (|P^e| R − A_{i−1})/|T_i^s|, where A_{i−1} = Σ_{j≤i−1} |S_j|, without risk. We find

(3) X = ((1 − R̃) c_{i,1} + R̃ c_{i+1,1}) / ((1 − R̃) c_{i,2} + R̃ c_{i+1,2}), R̃ = (|P^e| R − A_{i−1})/|T_i^s|.
Sampling from P^s can then be seen as inversion from the cumulative distribution function defined by the boundary of the squeeze polygon. Thus, for a ratio |P^s|/|P^e| close to 1, we have almost inversion for generating random variates. The inversion method has two advantages and is thus favored by the simulation community (see Bratley et al. [1983]): (1) the structure of the generator is simple and can easily be investigated (see Section 7), and (2) the random variates can be used for variance reduction techniques.

2.3 Expected Number of Uniform Random Numbers

Let ρ = |P^e \ P^s|/|P^e| = 1 − |P^s|/|P^e|. Then the expected number of uniform random numbers for generating one ratio V/U is (1 − ρ) + 2ρ = 1 + ρ. Since we have to reject this ratio if (v, u) ∉ A, and P^s ⊆ A, we find for the expected number of uniform random numbers per generated nonuniform variate E ≤ (1 + ρ)/(1 − ρ). Notice that by a proper choice of the construction points, ρ can be made arbitrarily small.

2.4 Bounded Domain for g

If x_0 > −∞ or x_1 < ∞, the situation is nearly the same. We have to distinguish between two cases:

(1) f(x_i) > 0 and f'(x_i) exists at the boundary point x_i. We then use x_i as a construction point, and the respective triangular segment S_0 or S_n is not necessary.

(2) Otherwise we restrict the triangular segment S_0 or S_n, i.e., we use the tangent line v − x_i u = 0 at vertex c_0 = (0,0) instead of the v-axis. Notice that we then have different tangent lines at c_0 for S_0 and S_n.

2.5 Adding a Construction Point

To add a new point for a given ratio x = v/u we need the point (c_v, c_u) on the "outer boundary" of A and the tangent line of A at this point. These are given by the positive root of u² = g(x) and the total differential of u² − g(v/u); hence we have

(4) boundary: c_u = √g(x), c_v = x·c_u;

    tangent: a_v v + a_u u = a_c = a_v c_v + a_u c_u,

where a_u = 2u + g'(x)x/u and a_v = −g'(x)/u.


3.1 Transformed Density Rejection

One of the most efficient universal methods is transformed density rejection, introduced in Devroye [1986] and, under a different name, in Gilks and Wild [1992], and generalized in Hormann [1995]. This acceptance/rejection technique uses the concavity of the transformed density to generate a hat function and squeezes automatically by means of tangents and secants. The user only needs to provide the density function and perhaps the (approximate) location of the mode. It can be utilized for any density f for which a strictly increasing, differentiable transformation T exists such that T(f(x)) is concave (see Hormann [1995] for details). Such a density is called T-concave; log-concave densities are an example, with T(x) = log(x). Figure 3 illustrates the situation for the standard normal distribution and the transformation T(x) = log(x). The left-hand side shows the transformed density with three tangents; the right-hand side shows the density function with the resulting hat. Squeezes are drawn as dashed lines. Evans and Swartz [1998] have shown that this technique is even suitable for arbitrary densities, provided that the inflection points of the transformed density are known.


3.2 Densities with Convex Region

Stadlober [1989] and Dieter [1989] have clarified the relationship of the ratio-of-uniforms method to the ordinary acceptance/rejection method. But there is also a deeper connection to transformed density rejection, which gives us a useful characterization of densities with convex region A_g. We first provide a proof of Theorem 1.

PROOF OF THEOREM 1. Consider the transformation

(5) R × (0, ∞) → R × (0, ∞), (V, U) ↦ (X, Y) = (V/U, U²).

Since the Jacobian of this transformation is 2, the joint probability density function of X and Y is given by w(x, y) = 1/(2|A|) if 0 < y ≤ g(x), and w(x, y) = 0 otherwise. Thus X has marginal density w_1(x) = g(x)/(2|A|). Consequently |A| = (1/2)∫g(x)dx and w_1(x) = f(x). Therefore X = V/U has probability density function f(x). []

Transformation (5) maps A_g one-to-one onto B_g = {(x,y) : 0 < y ≤ g(x), x_0 < x < x_1}, i.e., the set of points between the graph of g(x) and the x-axis. Moreover, the "outer boundary" of A_g, {(v,u) : u² = g(v/u), u > 0, x_0 < v/u < x_1}, is mapped onto the graph of g(x).

THEOREM 2. A_g is convex if and only if g(x) is T-concave with transformation T(x) = −1/√x.

PROOF. Since T(x) = −1/√x is strictly monotonically increasing, the transformation (X, Y) ↦ (X, T(Y)) maps B_g one-to-one onto C_g = {(x, y) : y ≤ T(g(x)), x_0 < x < x_1}, i.e., the region below the transformed density. Hence by T(u²) = −1/u,

(6) R × (0, ∞) → R × (−∞, 0), (V, U) ↦ (X, Y) = (V/U, −1/U)

maps A_g one-to-one onto C_g. Notice that g is T-concave if and only if C_g is convex. Thus it remains to show that A_g is convex if and only if C_g is convex, i.e., that straight lines remain straight lines under transformation (6). Let ax + by = d be a straight line in C_g. Then a(v/u) − b/u = d, and hence av − du = b, i.e., a straight line in A_g. Analogously we find for a straight line av + bu = d in A_g the line ax + dy = −b in C_g. []

Remark 1. By Theorem 2 the new universal ratio-of-uniforms method is in some sense equivalent to transformed density rejection. It is a different method for generating points uniformly distributed in the region below the hat function. But in contrast to the new method, transformed density rejection always needs at least two uniform random numbers. A similar approach for transformed density rejection, i.e., decomposing the region below the hat into the squeeze region (region of immediate acceptance) and the region between squeeze and hat, does not work well, since sampling from the second part is very awkward and prone to numerical errors.(2)

Since every log-concave density is T-concave with T(x) = −1/√x [Hormann 1995], our algorithm can be applied to a large class of distributions. Examples are given in Table I. The given conditions on the parameters imply T-concavity on the support of the densities. However, the densities are T-concave for a wider range of their parameters on a subset of their support; e.g., the density of the gamma distribution with b = 1 is T-concave for all a > 0 and x ≥ a − 1 + √(2 − 2a).
Table I. T-Concave Densities (Normalization Constants Omitted)

Distribution         Density                              Support      T-concave for

Normal               exp(−(x − μ)²/(2σ²))                 R            σ > 0
Log-normal           (1/x) exp(−(ln x − μ)²/(2σ²))        [0, ∞)       σ ≤ √2
Exponential          λ e^(−λx)                            [0, ∞)       λ > 0
Gamma                x^(a−1) e^(−bx)                      [0, ∞)       a ≥ 1, b > 0
Beta                 x^(a−1) (1 − x)^(b−1)                [0, 1]       a, b ≥ 1
Weibull              x^(a−1) exp(−x^a)                    [0, ∞)       a ≥ 1
Perks                1/(e^x + e^(−x) + a)                 R            a ≥ −2
Gen. inv. Gaussian   x^(a−1) exp(−bx − b*/x)              [0, ∞)       a ≥ 1, b, b* > 0
Student's t          (1 + x²/a)^(−(a+1)/2)                R            a ≥ 1
Pearson VI           x^(a−1)/(1 + x)^(a+b)                [0, ∞)       a, b ≥ 1
Cauchy               1/(1 + x²)                           R            —
Planck               x^a/(e^x − 1)                        [0, ∞)       a ≥ 1
Burr                 x^(a−1)/(1 + x^a)^b                  [0, ∞)       a ≥ 1, b ≥ 2
Snedecor's F         x^(m/2−1)/(1 + (m/n)x)^((m+n)/2)     [0, ∞)       m, n ≥ 2

The performance of the new algorithm depends on a small ratio ρ = |P^e \ P^s|/|P^e|, and thus on the choice of the construction points for the tangents of the enveloping polygon. There are three possible approaches: (1) simply choose equidistributed points, (2) use an adaptive method, or (3) use optimal points. Obviously, setup time increases and marginal generation time decreases from (1) to (3) for a given number of construction points.

4.1 Equidistributed Points

The simplest method is to choose points x_1, ..., x_n with equidistributed angles:

(7) x_i = tan(−π/2 + iπ/(n + 1)), i = 1, ..., n.

If the density function has bounded domain, (7) has to be modified to

(8) x_i = tan(θ_l + i(θ_r − θ_l)/(n + 1)), i = 1, ..., n,

where tan(θ_l) and tan(θ_r) are the left and right boundaries of the domain (see also Section 2). If the distribution has a mode m ≠ 0, use the points x_i + m (and shift the domain of the density function by −m). For x_i close to 0 each point is approximately the arithmetic mean of its neighbors; for very large x_i each point is approximately the harmonic mean of its neighbors. Numerical simulations with several density functions have shown that this is an acceptably good choice of construction points for distributions where the ratio of length to width of the minimal bounding rectangle is not too far from one.

To get an idea of the relationship between ρ_n and the number of construction points n, we look at the following special case. Assume 0 is the mode of a T-concave monotonically decreasing density f with domain [0, ∞). Let (a, b) be the right upper vertex of its minimal bounding rectangle R, i.e., a = sup_{x≥0} x√f(x) and b = √f(0) = max_{x≥0} √f(x). Furthermore assume that x_0 = 0 and that the slope of the tangent line at the mode is 0 (such a tangent always exists). The region between enveloping polygon and squeeze consists of n triangles, each with a base line of length l_i (an edge of the squeeze polygon) and base angles α_i and β_i, respectively. Due to the convexity of the region A, we find Σ l_i ≤ 2a + b, Σ(α_i + β_i) ≤ π, and α_i + β_i < π. Moreover, there is at most one triangle not completely inside R. For the areas of these triangles we find

(9) |T_i| = l_i² / (2(cot α_i + cot β_i)).
The total sum of the areas becomes largest when the base angles of all but one triangle tend to zero, i.e., when those areas vanish. Figure 4 shows the limit case. Using (9) we find




4.2 Adaptive Rejection Sampling

Gilks and Wild [1992] introduced the ingenious concept of adaptive rejection sampling for the problem of finding appropriate construction points for the tangents of the transformed density rejection method. Adapted to our situation it works in the following way. Start with (at least) two points on both sides of the mode and sample points from the enveloping polygon P^e. Add a new construction point at x = v/u whenever a point (v, u) falls into P^e \ P^s, until a certain stopping criterion is fulfilled, e.g., the maximal number of construction points or the desired ratio |P^s|/|P^e| is reached. To ensure that the starting polygon P^e is bounded, a construction point at (or at least close to) the mode should be used as a third starting point.

Sampling a point in the region P^e \ P^s is much more expensive than sampling from the squeeze region: first, the generation of a random point requires more random numbers and multiplications; second, we have to evaluate the density and check the acceptance condition. Thus we have to minimize the ratio ρ = |P^e \ P^s|/|P^e|, which is done perfectly well by adaptive rejection sampling, since by this method the region A is automatically approximated by the envelope and squeeze polygons. The probability of adding a new point in segment S_i depends on the ratio |T_i^o|/|P^e \ P^s|, i.e., on the probability of falling into T_i^o. Hence the adaptive algorithm tends to insert a new construction point where it is "most necessary."

Obviously, the ratio ρ_n is a random variable that converges to 0 almost surely as the number of construction points n tends to infinity. A simple consideration gives ρ_n = O(n^−2) [Leydold and Hormann 1998]. Figure 5 shows the result of a simulation for the standard normal distribution with (nonoptimal) starting points at x = ±0.4 (50,000 samples). ρ_n is plotted against the number n of construction points. The range of ρ_n is given by the light shaded area; 90%- and 50%-percentiles are given by dark shaded areas, the median by the solid line.


We have run simulations with other distributions and starting values and observed that convergence is even faster for other (nonnormal) distributions. Analytical investigations remain of interest; upper bounds for the expected value of ρ_n are an open problem.

4.3 Optimal Construction Points

By Theorem 2 the area between hat and squeeze of the transformed density rejection method is mapped one-to-one onto the region P^e \ P^s. Thus we can use methods for computing optimal construction points for transformed density rejection to find optimal envelopes for the new algorithm. For the case of only three construction points see Hormann [1995]. For the case where more points are required, Derflinger and Hormann(3) describe a very efficient method; however, some modifications are necessary. The improvements over adaptive rejection sampling are rather small, as can be seen in Figure 5 (the lower boundary of the range gives a good estimate for the optimal choice of construction points).


Algorithm arou consists of three main parts:

(1) Construct the starting enveloping polygon P^e and squeeze polygon P^s in routine arou_start. Here we have to take care of a possibly bounded domain and the two cases described in Section 2. The starting points must be provided (e.g., equidistributed points as described in Section 4).

(2) Sample from the given distribution in routine arou_sample.

(3) Add a new construction point with routine arou_add whenever we fall into P^e \ P^s.

We store the envelope as a list of segments (Table II). When using this algorithm we first have to initialize the generator by calling arou_start. Then sampling is done by calling arou_sample.
Algorithm arou_start.

Require: density f(x), derivative f'(x); domain (x_0, x_k); construction points x_1, ..., x_{k−1}.
  1: c_0 ← (0,0); c_{k+1} ← (0,0). /* origin */
  2: a_0 ← (cos(arctan(x_0)), −sin(arctan(x_0)), 0). /* tangent line for S_0 */
  3: a_{k+1} ← (cos(arctan(x_k)), −sin(arctan(x_k)), 0). /* tangent line for S_k */
  4: for i = 1, ..., k do /* all construction points x_i */
  5:   if f(x_i) > 0 and f'(x_i) exists then
  6:     c_{i,2} ← √f(x_i); c_{i,1} ← x_i c_{i,2}.
  7:     a_{i,v} ← −f'(x_i)/c_{i,2}; a_{i,u} ← 2c_{i,2} + x_i f'(x_i)/c_{i,2}; a_{i,c} ← c_{i,1} a_{i,v} + c_{i,2} a_{i,u}.
  8:     add S_i to list of segments.
       /* else x_i cannot be used as construction point */
  9: for all segments S_i do
  10:   insert c_{i+1} and a_{i+1}. /* already stored in next segment in list */
  11:   compute m_i.
  12:   compute |T_i^s|, |T_i^o|, and the accumulated areas.
  13: check if polygon P^e is bounded.
  14: return list of segments.

Algorithm arou_sample.

Require: density f(x); list of segments S_i.
  1: loop
  2:   generate R ~ U(0, 1).
  3:   find the smallest i such that A_i = Σ_{j≤i} |S_j| ≥ |P^e| R. /* use guide table */
  4:   if |P^e| R ≤ A_i − |T_i^o| then /* inside squeeze, triangle T_i^s */
  5:     R_1 ← (|P^e| R − A_{i−1})/|T_i^s|. /* recycle uniform random number */
  6:     return X ← ((1 − R_1) c_{i,1} + R_1 c_{i+1,1}) / ((1 − R_1) c_{i,2} + R_1 c_{i+1,2}). /* Eq. (3) */
  7:   else /* outside squeeze, triangle T_i^o */
  8:     generate R_1 ~ U(0, 1).
  9:     generate R_2 ~ U(0, 1).
  10:    if R_1 > R_2 then swap R_1, R_2.
  11:    R_3 ← 1 − R_2; R_2 ← R_2 − R_1.
  12:    U ← c_{i,2} R_1 + c_{i+1,2} R_2 + m_{i,2} R_3.
  13:    X ← (c_{i,1} R_1 + c_{i+1,1} R_2 + m_{i,1} R_3)/U.
  14:    if number of segments < maximum then
  15:      call arou_add with X, S_i.
  16:    if U² ≤ f(X) then
  17:      return X.

Algorithm arou_add.

Require: density f(x), derivative f'(x); new construction point x_n; segment S_r.
  1: if f(x_n) = 0 or f'(x_n) does not exist then /* cannot add this point */
  2:   return.
  3: c_{n,2} ← √f(x_n); c_{n,1} ← x_n c_{n,2}.
  4: a_{n,v} ← −f'(x_n)/c_{n,2}; a_{n,u} ← 2c_{n,2} + x_n f'(x_n)/c_{n,2}; a_{n,c} ← c_{n,1} a_{n,v} + c_{n,2} a_{n,u}.
  5: insert S_n into list of segments. /* take care of c_{i+1} and a_{i+1} */
  6: remove old segment S_r from list.
  7: compute m_n.
  8: compute |T_n^s| and |T_n^o|.
  9: for all segments S_i do
  10:   update the accumulated areas A_i = Σ_{j≤i} |S_j|.
  11: return new list of segments.
Table II. Object segment

Parameter                     Variable            Definition/Remark

left construction point       c_i
right construction point      c_{i+1}             pointer, stored in next segment
tangent at left point         a_i                 (a_v, a_u, a_c), see (4)
tangent at right point        a_{i+1}             pointer, stored in next segment
intersection point            m_i
area inside/outside squeeze   |T_i^s|, |T_i^o|
accumulated area              A_i                 Σ_{j≤i} |S_j|, for fast inversion

To implement this algorithm a linked list of segments is necessary. Whenever the accumulated areas A_i are (re)calculated, a guide table has to be rebuilt. Linear search may be a good method for finding S_i when only a few random variates are sampled.

Special care is necessary when m_i is computed in arou_start and arou_add. There are three possible cases of numerical problems when solving the corresponding linear equation:

(1) The vertices c_i and c_{i+1} are very close, and (consequently) |S_i| is very small. Here we simply reject c_{i+1} as a new construction point.

(2) c_i and c_{i+1} are very close to c_0 = (0, 0). Again |S_i| is very small.

(3) The boundary of A between c_i and c_{i+1} is almost a straight line, and |T_i^o| is (almost) 0. In this case we set m_i = (c_i + c_{i+1})/2.

A possible way to define "very small" is to compare such numbers with the smallest positive [Epsilon] satisfying (M + [Epsilon]) [is not equal to] M in the programming language used, where M denotes the magnitude of the maximum of the density function. (In ANSI C, for M = 1, [Epsilon] is given by the macro DBL_EPSILON.)
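In ANSI C this "very small" test can be written as a scale-aware comparison; is_negligible is an illustrative helper name, not from the paper:

```c
#include <float.h>
#include <math.h>

/* x counts as negligible relative to magnitude M when adding it to M
   does not change M, i.e. when |x| <= M * DBL_EPSILON.
   For M = 1 this threshold is exactly DBL_EPSILON. */
static int is_negligible(double x, double M)
{
    return fabs(x) <= M * DBL_EPSILON;
}
```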

It is important to check whether [m.sub.i] is on the outer side of the secant through [c.sub.i] and [c.sub.i+1]. This condition is violated in arou_start when the polygon [P.sup.e] is unbounded. It may be violated in arou_start and arou_add when A is not convex.
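This check amounts to a sign test on a 2D cross product. A minimal sketch follows; the counterclockwise orientation convention (outer side = left of the directed secant) is an assumption for illustration, not taken from the paper:

```c
typedef struct { double v, u; } Pt;

/* Test whether m lies strictly left of the directed secant from c_i to
   c_{i+1}, via the sign of the 2D cross product of (c_{i+1} - c_i)
   and (m - c_i). */
static int outside_secant(Pt ci, Pt cip1, Pt m)
{
    double cross = (cip1.v - ci.v) * (m.u - ci.u)
                 - (cip1.u - ci.u) * (m.v - ci.v);
    return cross > 0.0;
}
```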


A version of algorithm arou is coded in C and available by email request from the author. We have compared it to two other universal methods: transformed density rejection with T(x) = -1/[square root of x] (tdr) and the table method (tabl) by Ahrens [1993]. (However, we have modified split B by replacing the recursive search by x = tan((arctan([x.sub.1]) + arctan([x.sub.2]))/2), a mean value similar to Eq. (8).) Notice that tabl is only applicable for densities with bounded support; thus we have to cut off unbounded domains (we used -[10.sup.50] and [10.sup.50], respectively). The main goal in implementing all three algorithms was to obtain a flexible and robust program. Moreover, for small [Rho], generation should be close to inversion. Thus, linked lists of structures have been used. Constructions like storing all data in a single array and using sophisticated indices to find them again (as described in Ahrens [1993]) have been avoided. For the underlying uniform random-number generator we have used the library prng-2.2 [Lendl 1997] with generator CMRG by L'Ecuyer [1996], a combined multiple recursive random-number generator with a long period (generation time 0.31 [micro]s).

The timings have been performed on a PC (AMD K2 400MHz, Linux 2.0.36, gcc version 2.95.1). We started with 30 construction points, using the "equidistribution rule" for arou and tdr, and the "equiarea rule with splitting" for tabl (see Ahrens [1993] for details). Tables III and IV show the results for some distributions. We then continued with adaptive rejection sampling to get more construction points until [Rho] [is less than or equal to] 0.01 (Zaman [1996] has suggested this procedure for the table method). Table V shows the number of resulting segments and intervals, respectively, and the marginal generation times for the generator when no more construction points are added.

Table III. [Rho] and Average Number of Uniform Random Numbers for 30 Fixed Construction Points Using "Equidistribution Rule" (arou, tdr) and "Equiarea Rule with Splitting" (tabl), Respectively

                   arou            tdr             tabl

              [Rho]   #urn   [Rho]   #urn   [Rho]   #urn

Normal        0.021   1.029   0.021   2.014   0.192   1.334
Student(2)    0.022   1.028   0.022   2.013   0.561   2.475
Cauchy        0.067   1.068   0.067   2.002   0.788   5.231
Gamma(10)     0.094   1.137   0.094   2.079   0.207   1.362
Beta(10,20)   0.022   1.029   0.022   2.016   0.160   1.265

Table IV. Setup Time ([t.sub.s]) and Average Marginal Generation Time ([t.sub.g]) (Sample Size [10.sup.6]) for 30 Construction Points (see Table III)

arou
               [t.sub.s]    [t.sub.g]
              ([micro]s)   ([micro]s)

Normal           182          0.77
Student(2)       230          0.79
Cauchy           178          0.81
Gamma(10)        220          0.92
Beta(10,20)      235          0.78

tdr
               [t.sub.s]    [t.sub.g]
              ([micro]s)   ([micro]s)

Normal           261          1.53
Student(2)       303          1.55
Cauchy           251          1.55
Gamma(10)        295          1.68
Beta(10,20)      312          1.54

tabl
               [t.sub.s]    [t.sub.g]
              ([micro]s)   ([micro]s)

Normal           110          1.03
Student(2)       124          2.52
Cauchy            91          3.74
Gamma(10)        127          1.16
Beta(10,20)      130          1.10

Table V. Adding Construction Points by Adaptive Rejection Sampling until [Rho] [is less than or equal to] 0.01. Average Marginal Generation Time (when [Rho] = 0.01) and 90%-Percentile for Respective Number of Segments and Intervals (Sample Size [10.sup.5])

arou
              ([micro]s)   Segments

Normal           0.75       (40,46)
Student(2)       0.76       (37,44)
Cauchy           0.75       (34,40)
Gamma(10)        0.76       (49,56)
Beta(10,20)      0.76       (44,50)

tdr
              ([micro]s)   Intervals

Normal           1.51       (41,48)
Student(2)       1.52       (38,46)
Cauchy           1.52       (35,43)
Gamma(10)        1.50       (49,57)
Beta(10,20)      1.50       (45,52)

tabl
              ([micro]s)   Intervals

Normal           0.78       (573, 598)
Student(2)       0.80      (1057,1093)
Cauchy           0.81      (1559,1601)
Gamma(10)        0.79       (562, 587)
Beta(10,20)      0.79       (540, 564)

As expected, Tables III and IV show that method arou is superior to tdr. It requires fewer uniform random numbers. Moreover, since it requires fewer computations, its setup time is shorter and the marginal generation is much faster. Table V demonstrates the advantage of the better-fitting hat of method arou compared to tabl: a considerably lower number of segments is required, which results in a faster setup step for a fixed small [Rho]. This observation is supported by the theoretical result that [Rho] is O(1/[n.sup.2]) for arou and tdr, but O(1/n) for tabl. The average generation times that include setup time and rebuilding the guide tables for sample size [10.sup.5] have been found to be about the same as the marginal generation times for arou and tdr, but are considerably larger for tabl (more than 100% larger for the Cauchy distribution).


The new algorithm is a composition method, similar to the acceptance-complement method (see Devroye [1986, Section II.5]). We have f(x) = (1 - [Rho])[g.sub.s](x) + [Rho][g.sub.o](x), where [g.sub.s](x) is the density defined by the squeeze region and where [g.sub.o](x) is proportional to f(x) - (1 - [Rho])[g.sub.s](x). By Theorem 1 the algorithm is exact, i.e., the generated random variates have the required distribution. However, defects in the underlying uniform random-number generator may result in poor quality of the nonuniform random variates. Moreover, the transformation into the nonuniform random variate itself may cause further deficiencies.

Although there is little literature on this topic, the ratio-of-uniforms method in combination with any linear congruential generator (LCG) was reported to have defects [Hormann 1994a; 1994b]. Due to the lattice structure of random pairs generated by an LCG, there is always a hole without a point with probability of order 1/[square root of M], where M is the modulus of the LCG.

Random variates generated by inversion inherit the structure of the underlying uniform random numbers and consequently their quality. We consider this a great advantage of this method, since
   ... generators whose structural properties are well understood and
   precisely described may look less random, but those that are more
   complicated and less understood are not necessarily better. They may hide
   strong correlations or other important defects ... One should avoid
   generators without convincing theoretical support [L'Ecuyer 1998].

This statement by L'Ecuyer on building uniform random-number generators is also valid for nonuniform distributions. Other methods may have some hidden interferences, which make a prediction of the quality of the resulting nonuniform random numbers impossible [Leydold et al. 2000].

Notice that a random variate with density [g.sub.s](x) is generated by inversion. Thus, as the ratio [Rho] tends to 0, most of the random variates are generated by inversion by the new algorithm. As an immediate consequence, for small [Rho] the new generator avoids the defects of the basic ratio-of-uniforms method. Figure 6 shows scatter plots of all overlapping tuples ([u.sub.0], [u.sub.1]), ([u.sub.1], [u.sub.2]), ([u.sub.2], [u.sub.3]), ... using the "baby" generator [u.sub.n+1] = 869[u.sub.n] + 1 mod 1024. Figure 6(a) shows the underlying generator. Figures 6(b)-6(f) show the tuples ([Phi]([u.sub.0]), [Phi]([u.sub.1])), ([Phi]([u.sub.1]), [Phi]([u.sub.2])), ([Phi]([u.sub.2]), [Phi]([u.sub.3])), ... for different numbers of construction points using the equidistribution method ([Phi] denotes the cumulative distribution function of the standard normal distribution).
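The "baby" generator is easy to reproduce for experiments of this kind. A sketch, returning the state scaled to a uniform on [0, 1); the global seed variable is illustrative:

```c
/* The "baby" LCG u_{n+1} = (869 u_n + 1) mod 1024 from the text.
   With modulus 1024 every returned value is an exact multiple of
   1/1024, so the lattice structure is plainly visible in pair plots. */
static unsigned int baby_state = 0;

static double baby_next(void)
{
    baby_state = (869u * baby_state + 1u) % 1024u;
    return baby_state / 1024.0;
}
```

Plotting consecutive pairs (baby_next(), baby_next()) reproduces the lattice of Figure 6(a).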


We have made an empirical investigation using M-tuple tests [Good 1953; Marsaglia 1985] in the setup of Leydold et al. [2000] with the standard normal distribution and various numbers of construction points. We have used a linear congruential generator fish by Fishman and Moore [1986], an explicit inversive congruential generator [Eichenauer-Herrmann 1993], a twisted GFSR generator (tt800 by Matsumoto and Kurita [1994]), and finally the infamous randu (again an LCG) as an example of a generator with bad lattice structure (see Park and Miller [1988]). These tests have demonstrated that for small ratio [Rho], the quality of the normal generators is strongly correlated with the quality of the underlying uniform random-number generator. In particular, using randu results in a normal generator of bad quality. Notice, however, that this correlation does not exist if [Rho] is not close to 0. Indeed, using only 2 or 4 construction points results in a normal generator which might be better (e.g., fish in our tests) or worse (e.g., randu) than the underlying generator.


8.1 Nonconvex Region

The algorithm can be modified to work with nonconvex regions [A.sub.f]. Adapting the idea of Evans and Swartz [1998], we have to partition [A.sub.f] into segments using the inflection points of the transformed density with transformation T(x) = -1/[square root of x]. In each segment of [A.sub.f] where T(f(x)) is not concave but convex, we have to use secants for the boundary of the enveloping polygon [P.sup.e] and tangents for the squeeze [P.sup.s] (see Figure 7). Notice that the squeeze region in such a segment is a quadrangle [c.sub.0][c.sub.i][m.sub.i][c.sub.i+1] and has to be triangulated. The changes to algorithm arou are straightforward: (1) include the areas of the left and right squeeze triangles into object 2; (2) compute these two triangle areas instead of the single squeeze area for all nonconvex segments of A; (3) in arou_sample, when we generate a point inside the squeeze polygon of a nonconvex segment, we first have to decide by means of the two triangle areas which triangle (left or right) has to be used.


8.2 Multivariate Distributions

Wakefield et al. [1991] and Stefanescu and Vaduva [1987] have generalized the ratio-of-uniforms method to multivariate distributions. Both use rejection from an enclosing multidimensional rectangle. However, the acceptance probability decreases very fast in higher dimensions; for multivariate normal distributions in four dimensions it is below 1%. Using polyhedral envelopes similar to Leydold and Hormann [1998] or Leydold [1998] is possible and would increase the acceptance probability, but this requires some additional research.



ACKNOWLEDGMENTS

The author wishes to thank Hannes Leeb for helpful discussions on the quality of random-number generators.

(1) Moreover, the method has been extended: Wakefield et al. [1991] replace the function q(u) = [u.sup.2] by a more general strictly increasing differentiable function q(u). Stadlober [1989; 1990] gives a modification for discrete distributions. Jones and Lunn [1996] embed this method into a "general random variate generation framework." Wakefield et al. [1991] and Stefanescu and Vaduva [1987] apply this method to the generation of multivariate distributions.

(2) Private communication, W. Hormann, 1999.

(3) Private communication (1998). The optimal selection of hat functions for rejection algorithms. In preparation.


REFERENCES

AHRENS, J. H. 1993. Sampling from general distributions by suboptimal division of domains. Grazer Math. Berichte 319.

AHRENS, J. H. 1995. A one-table method for sampling from continuous and discrete distributions. Computing 54, 2, 127-146.

AHRENS, J. H. AND KOHRT, K. D. 1981. Computer methods for efficient sampling from largely arbitrary statistical distributions. Computing 26, 1, 19-31.

BARABESI, L. 1993. Random variate generation by using the ratio-of-uniforms method. Tech. Rep. 1-1993. Dipartimento di Metodi Quantitativi, Universita degli Studi di Siena, Siena, Italy.

BRATLEY, P., Fox, B. L., AND SCHRAGE, E. L. 1983. A Guide to Simulation. Springer-Verlag, New York, NY.

CHEN, H. C. AND ASAU, Y. 1974. On generating random variates from an empirical distribution. AIIE Trans. 6, 163-166.

CHENG, R. C. H. AND FEAST, G. M. 1979. Some simple gamma variate generators. Appl. Stat. 28, 3, 290-295.

DAGPUNAR, J. 1988. Principles of Random Variate Generation. Clarendon Oxford Science, Oxford, UK.

DEVROYE, L. 1984. A simple algorithm for generating random variates with a log-concave density. Computing 33, 3-4, 247-257.

DEVROYE, L. 1986. Non-Uniform Random Variate Generation. Springer-Verlag, New York, NY.

DIETER, U. 1989. Mathematical aspects of various methods for sampling from classical distributions. In Proceedings of the Winter Conference on Simulation (WSC '89, Washington, D.C., Dec. 4-6, 1989), E. A. MacNair, K. J. Musselman, and P. Heidelberger, Eds. ACM Press, New York, NY, 477-483.

EICHENAUER-HERRMANN, J. 1993. Statistical independence of a new class of inversive congruential pseudorandom numbers. Math. Comput. 60, 201, 375-384.

EVANS, M. AND SWARTZ, T. 1998. Random variable generation using concavity properties of transformed densities. J. Comput. Graph. Stat. 7, 4, 514-528.

FISHMAN, G. S. AND MOORE, L. R. 1986. An exhaustive analysis of multiplicative congruential random number generators with modulus [2.sup.31] - 1. SIAM J. Sci. Stat. Comput. 7, 1 (Jan. 1986), 24-45. See erratum, ibid., p. 1058.

GILKS, W. R. AND WILD, P. 1992. Adaptive rejection sampling for Gibbs sampling. Appl. Stat. 41, 2, 337-348.

GOOD, I. J. 1953. The serial test for sampling numbers and other tests for randomness. Proc. Cambridge Philos. Soc. 49, 276-284.

HORMANN, W. 1994a. A note on the quality of random variates generated by the ratio of uniforms method. ACM Trans. Model. Comput. Simul. 4, 1 (Jan. 1994), 96-106.

HORMANN, W. 1994b. The quality of non-uniform random numbers. In Proceedings of the Conference on Operations Research, H. Dyckhoff, U. Derings, M. Salomon, and H. C. Tijms, Eds. Springer-Verlag, Vienna, Austria, 329-335.

HORMANN, W. 1995. A rejection technique for sampling from T-concave distributions. ACM Trans. Math. Softw. 21, 2 (June), 182-193.

JONES, M. C. AND LUNN, A. D. 1996. Transformations and random variate generation: Generalised ratio-of-uniforms methods. J. Stat. Comput. Simul. 55, 1/2, 49-55.

KINDERMAN, A. J. AND MONAHAN, J. F. 1977. Computer generation of random variables using the ratio of uniform deviates. ACM Trans. Math. Softw. 3, 3 (Sept.), 257-260.

L'ECUYER, P. 1996. Combined multiple recursive random number generators. Oper. Res. 44, 5, 816-822.

L'ECUYER, P. 1998. Random number generation. In Handbook of Simulation, J. Banks, Ed. John Wiley and Sons, Inc., New York, NY, 93-137.

LENDL, O. 1997. prng 2.2: A library for the generation of pseudorandom numbers. (Software). Institute of Mathematics, Paris-Lodron University Salzburg, Salzburg, Austria. Available at

LEVA, J. L. 1992. A fast normal random number generator. ACM Trans. Math. Softw. 18, 4 (Dec. 1992), 449-453.

LEYDOLD, J. 1998. A rejection technique for sampling from log-concave multivariate distributions. ACM Trans. Model. Comput. Simul. 8, 3, 254-280.

LEYDOLD, J. AND HORMANN, W. 1998. A sweep-plane algorithm for generating random tuples in simple polytopes. Math. Comput. 67, 224, 1617-1635.

LEYDOLD, J., LEEB, H., AND HORMANN, W. 2000. Higher dimensional properties of non-uniform pseudo-random variates. In Proceedings of the Conference on Monte Carlo and Quasi-Monte Carlo Methods 1998 (Claremont, CA, June 22-26, 1998), H. Niederreiter and J. Spanier, Eds. Springer-Verlag, Vienna, Austria, 341-355.

MARSAGLIA, G. 1985. A current view of random number generators. In Proceedings of the 16th Symposium on the Interface (Atlanta, GA, Mar. 1984), L. Billard, Ed. Computer Science and Statistics. Elsevier North-Holland, Inc., New York, NY, 3-10.

MARSAGLIA, G. AND TSANG, W. W. 1984. A fast, easily implemented method for sampling from decreasing or symmetric unimodal density functions. SIAM J. Sci. Stat. Comput. 5, 2 (June), 349-359.

MATSUMOTO, M. AND KURITA, Y. 1994. Twisted GFSR generators II. ACM Trans. Model. Comput. Simul. 4, 3 (July), 254-266.

PARK, S. K. AND MILLER, K. W. 1988. Random number generators: Good ones are hard to find. Commun. ACM 31, 10 (Oct. 1988), 1192-1201.

ROBERTSON, I. AND WALLS, L. A. 1980. Random number generation for the normal and gamma distributions using the ratio of uniforms method. Tech. Rep. AERE-R 10032. United Kingdom Atomic Energy Authority, Harwell, Oxfordshire, UK.

STADLOBER, E. 1989. Sampling from Poisson, binomial, and hypergeometric distributions: Ratio of uniforms as a simple and fast alternative. Rep. 303. Institut fur Mathematik, Forschungsgesellschaft Joanneum-Graz, Graz, Austria.

STADLOBER, E. 1990. The ratio of uniforms approach for generating discrete random variates. J. Comput. Appl. Math. 31, 1 (July), 181-189.

STEFANESCU, S. AND VADUVA, I. 1987. On computer generation of random vectors by transformation of uniformly distributed vectors. Computing 39, 2, 141-153.

WAKEFIELD, J. C., GELFAND, A. E., AND SMITH, A. F. M. 1991. Efficient generation of random variates via the ratio-of-uniforms method. Stat. Comput. 1, 129-133.

ZAMAN, A. 1996. Generation of random numbers from an arbitrary unimodal density by cutting corners. Available at

Received: July 1999; revised: November 1999; accepted: November 1999

This work was partially supported by the Austrian Science Foundation (FWF), project no. P12805-MAT.

Author's address: Department for Applied Statistics and Data Processing, University of Economics and Business Administration, Vienna, Augasse 2-6, A-1090, Austria; email:
COPYRIGHT 2000 Association for Computing Machinery, Inc.
Publication: ACM Transactions on Mathematical Software, March 1, 2000.