
Computational investigations of low-discrepancy sequences.


We have investigated the performance of several common low-discrepancy sequences (i.e., sequences that attempt to place sample points nearly uniformly in space) for large numbers of dimensions (40 to 400) and have made some changes in an attempt to improve their performance. These sequences are needed for multidimensional integration, which has many applications. Our particular interest arises in accuracy estimation for simulation models used in the design of mineral processing plants.

Quasi Monte Carlo integration uses a low-discrepancy sequence to generate sample points; then, as in Monte Carlo integration (which uses random sample points), the estimate of the integral is simply the average of the integrand values at the sample points.
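As a concrete sketch (ours, not from the paper), the following Python fragment estimates a one-dimensional integral by averaging integrand values at van der Corput points; the function names are our own choices.

```python
# Minimal sketch of quasi Monte Carlo integration in one dimension,
# using the base-2 van der Corput sequence as the low-discrepancy sequence.

def van_der_corput(n, base=2):
    """Radical inverse of the integer n in the given base: the base-b
    digits of n are reflected about the radix point."""
    x, denom = 0.0, 1.0
    while n:
        n, digit = divmod(n, base)
        denom *= base
        x += digit / denom
    return x

def qmc_integrate(f, n_points):
    # The estimate is simply the average of the integrand values
    # at the low-discrepancy sample points.
    return sum(f(van_der_corput(n)) for n in range(1, n_points + 1)) / n_points

est = qmc_integrate(lambda x: x * x, 4096)  # exact integral over [0,1] is 1/3
```

With 4096 points the estimate agrees with the exact value 1/3 to about four decimal places, far better than a typical random sample of the same size.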

Many of the relevant low-discrepancy sequences are linked to the van der Corput sequence, introduced originally for dimension s = 1 and base b = 2 [van der Corput 1935a; 1935b]. Van der Corput's discovery, that the coefficients of the digit expansion of an increasing integer n in base b can be used to define a one-dimensional low-discrepancy sequence, inspired Halton [1960] to use s van der Corput sequences with relatively prime bases, one for each dimension, to create an s-dimensional low-discrepancy sequence. A different approach was taken by Sobol [1967], who suggested a multidimensional (t,s)-sequence in base 2. The Sobol idea was developed further by Faure [1982], who suggested alternative multidimensional (t,s)-sequences with base $b \ge s$.

The Halton [1960] and the Faure [1982] constructions have proved to have very poor properties for large s (this is demonstrated in Sections 2.1 and 3.1). While we are not aware of significant improvements to the Faure sequence, the performance of the Halton sequence has been greatly improved by permutations of the coefficients appearing in the radical inverse functions. Such permutations were suggested by Braaten and Weller [1979], who presented results for $s \le 16$.

In contrast to the Halton and the Faure sequences, the Sobol sequence appears to maintain good properties as the number of dimensions increases. General construction principles for (t,s)-sequences were formulated by Niederreiter [1992], who, using these principles, found a class of (t,s)-sequences defined using coefficients of the Laurent series (the Niederreiter sequences). A subclass of the Niederreiter sequences that can be constructed using polynomial arithmetic over finite fields was recently developed by Tezuka [1993].

A peculiar feature of the literature on quasi Monte Carlo integration is that, while the method is introduced as superior to common Monte Carlo integration, for N sample points the associated theory yields an integration error bound that is proportional to the discrepancy (a measure of the nonuniformity of a point sequence; see Hickernell [1995], Niederreiter [1992], or Sloan and Joe [1994]):

$D_N = O((\log N)^s / N)$. (1)

For moderate and large s (40-400) and any realistic values of N, this bound is very much larger than the expected error of Monte Carlo integration, $O(1/\sqrt{N})$. Also, the constants obtained for the discrepancy bound (1) for different sequences give values that are unrealistic for larger values of s. A table of the coefficients for the discrepancy bound (1) for the Halton, Faure, and Sobol generators was given by Faure [1982] and reproduced by Braaten and Weller [1979]. The coefficients implied by the dominant terms in the discrepancy bound (1) for s = 400 are of cosmological size for the Halton sequence, of astronomical size for the Sobol sequence, and extremely small for the Faure sequence. Thus, these theoretical results for larger values of s do not give useful descriptions of the properties of the low-discrepancy sequences mentioned.

Experimental studies of the performance of quasi Monte Carlo sequences reported in the literature [Braaten and Weller 1979; Bratley and Fox 1988; Bratley et al. 1992; Fox 1986; Tezuka 1993] cover only a restricted number of dimensions. They include computations of the star discrepancy (a measure of nonuniformity based on hypercubes that include the origin; see Hickernell [1995], Niederreiter [1992], or Sloan and Joe [1994]) and computations of the error of quasi Monte Carlo integration. However, these computations used test functions with variances and averages that depended on s, and the error of integration included random scatter. These computational studies indicate that quasi Monte Carlo outperforms classical Monte Carlo only when the sample size is sufficiently large, and this sample size increases with the number of dimensions, s. Hence, a possible advantage of quasi Monte Carlo integration for samples $N = 10^3$ to $10^6$ and dimensions s = 40 to 400 is not obvious and has to be determined by means of appropriate computational studies.


2.1 Halton Sequence

The Halton sequence in relatively prime bases $b_1, b_2, \ldots, b_s$ is defined as the sequence

$x_n = (\Phi_{b_1}(n), \ldots, \Phi_{b_j}(n), \ldots, \Phi_{b_s}(n))$ (2)

where $\Phi_{b_j}(n)$ is the jth radical inverse function:

$\Phi_{b_j}(n) = \sum_{i=0}^{\infty} a_i(j,n)\, b_j^{-i-1}$. (3)

This sum is finite, with the coefficients $a_i(j,n)$ (where $0 \le a_i(j,n) < b_j$) coming from the digit expansion of the integer n in base $b_j$:

$n = \sum_{i=0}^{\infty} a_i(j,n)\, b_j^{\,i}$. (4)

See, for example, Halton [1960], Krommer and Ueberhuber [1994], and Press et al. [1992]. To achieve the best (i.e., the minimum) discrepancy of the Halton sequence (2), the bases $b_1, b_2, \ldots, b_s$ are selected as the sequence of consecutive primes, beginning with the smallest.
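The construction (2)-(4) is short enough to sketch directly; the following Python fragment (our code, not the authors' C routine) generates the nth Halton point for a given list of prime bases.

```python
# Sketch of the Halton construction (2)-(4). The helper names are ours.

def radical_inverse(n, base):
    """Phi_b(n): reflect the base-b digits of n about the radix point."""
    x, denom = 0.0, 1.0
    while n:
        n, digit = divmod(n, base)
        denom *= base
        x += digit / denom
    return x

def halton_point(n, bases):
    """The nth point of the Halton sequence with the given prime bases."""
    return tuple(radical_inverse(n, b) for b in bases)

# Bases are the first s consecutive primes, smallest first, to minimize
# the discrepancy of the sequence.
PRIMES = [2, 3, 5, 7, 11, 13]
p = halton_point(1, PRIMES)  # (1/2, 1/3, 1/5, 1/7, 1/11, 1/13)
```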

The star discrepancy (see Krommer and Ueberhuber [1994] and Niederreiter [1992]) is a measure of the nonuniformity of points placed in the hypercube $[0,1]^s$. The star discrepancy of the Halton sequence with pairwise relatively prime bases satisfies the relation

$D_N^* \le \frac{s}{N} + \frac{1}{N} \prod_{j=1}^{s} \left( \frac{b_j - 1}{2 \log b_j} \log N + \frac{b_j + 1}{2} \right)$. (5)

See Krommer and Ueberhuber [1994]. The discrepancy bound of the Halton sequence (2) can be rewritten as

$D_N^* \le K_1 \frac{(\log N)^s}{N} + O\left( \frac{(\log N)^{s-1}}{N} \right)$ (6)

where the best-known constant of the leading term, $K_1$, is

$K_1 = \prod_{j=1}^{s} \frac{b_j - 1}{2 \log b_j}$. (7)

It is obvious that for s = 400 and prime bases the expression above grows beyond cosmological size.
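The size of $K_1$ is easy to verify numerically. The sketch below (our code; `first_primes` and `log10_K1` are our names) accumulates $\log_{10} K_1$ so the product never overflows; for s = 400 the result runs to several hundred, i.e., $K_1$ vastly exceeds $10^{100}$.

```python
# Numeric check of the leading constant K_1 in (7) for the first s primes.
from math import log, log10

def first_primes(s):
    """Return the first s primes by simple trial division."""
    primes, n = [], 2
    while len(primes) < s:
        if all(n % p for p in primes):
            primes.append(n)
        n += 1
    return primes

def log10_K1(s):
    """log10 of K_1 = prod_{j=1}^{s} (b_j - 1) / (2 ln b_j)."""
    return sum(log10((b - 1) / (2 * log(b))) for b in first_primes(s))
```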

A problem with the Halton sequence comes from the correlations between the radical inverse functions for different dimensions [Braaten and Weller 1979; Fox 1986]. The significance of these correlations becomes apparent for medium and larger s.

We implemented the Halton sequence as a routine in C. Given integers N and s, this routine supplies the Nth point of the s-dimensional Halton sequence. Using this routine, we generated points of the Halton sequence up to s = 400 and $N = 10^6$ and observed their uniformity in two-dimensional planes selected at random.

We have found that the unit square given by dimensions $j_1$ and $j_2$ is sampled reasonably uniformly if both $j_1$ and $j_2$ are small ($j_1, j_2 < 10$). If $j_2$ is large and $j_1$ is small, then the corresponding unit square is sampled uniformly in cycles of length $b_{j_2}$. If both $b_{j_1}$ and $b_{j_2}$ are relatively large but significantly different ($j_1 < j_2$), then the unit square is filled in cycles of length $b_{j_2}$, but the points are clustered into $b_{j_2}/b_{j_1}$ lines. If $b_{j_1}$ and $b_{j_2}$ are large and approximately the same, then the unit square is initially filled with points that are clustered into a line in the vicinity of the square diagonal. Increasing N, we see that the new sample points fall into further lines. For example, for $j_1 = 39$ and $j_2 = 40$ (Figure 1(a)), we have $b_{39} = 167$ and $b_{40} = 173$. Hence the first cycle will include 173 points: 167 of them clustered into a nearly diagonal line, with the remaining six clustered into a short line in the top left corner of the unit square. If we consider the first 2000 points, then the number of lines is 2000/173 = 11.56. This means that the 12th line, the bottom line in Figure 1(a), is only partly filled.

However, even though the quality of the Halton sequence rapidly deteriorates with increasing s, for larger values of s (100-400) the theoretical limiting coefficient (7), together with the term $(\log N)^s$ in (1), causes a cosmological overestimation of the star discrepancy of this sequence for any realistic value of N.

2.2 Generalized Halton Sequence

A way to improve the uniformity of the Halton sequence is to break the correlations between the radical inverse functions of different dimensions [Braaten and Weller 1979; Hellekalek 1984]. This is achieved by permutations of the coefficients $a_i$ in each of the radical inverse functions $\Phi_{b_j}$ given by Eq. (3). The generalized radical inverse function in base $b_j$ becomes

$\Phi_{b_j}(n) = \sum_{i} \Sigma(a_i(j, n))\, b_j^{-i-1}$ (8)

where $\Sigma$ is a permutation operator applied to the $a_i(j,n)$; see Krommer and Ueberhuber [1994].

If the generalized radical inverse function (8) is used in sequence (2), then sequence (2) becomes the generalized Halton sequence. It is generally believed that any permutation of the coefficients $a_i(j,n)$ in (8) gives the discrepancy bound (1). The constant implied by the discrepancy bound for the generalized Halton sequence will certainly depend on the particular permutations selected; however, theoretical formulae showing this are not available. We are not aware of a proof that for s > 2 sequence (2) with $\Phi_{b_j}(n)$ defined as in (8) has the discrepancy bound $O((\log N)^s/N)$. A proof that the generalized Halton sequence has the discrepancy bound (1) for s = 2 was given by Faure [1986].

Braaten and Weller [1979] reordered the coefficients $a_i(j,n)$ so that the discrepancy of the finite sequence $\{\Sigma(a_i(j,n))\}$, $a_i < b$, is minimal. To evaluate the effect of their suggestion, Braaten and Weller computed an estimate of the discrepancy of the original Halton sequence and of their sequence for N up to $10^3$, and the error of integration of a test function [Mathematical Expression Omitted] up to $N = 10^4$ for s = 8, 12, and 16. They found that their generalized Halton sequence performs much better than the original Halton construction. Further studies regarding the permutations $\Sigma(a_i(j,n))$ were reported by Hellekalek [1984].

The studies of Hellekalek [1984] and Braaten and Weller [1979] gave us a promising direction for improvement of the original Halton sequence for larger values of s.

2.3 Reverse-Radix Algorithm

There is no recipe that leads to optimal permutations of the coefficients $a_i(j,n)$. Rather, it is a question of art to suggest an algorithm that is simple, effective, and usable for an unrestricted number of dimensions. The algorithm suggested by Braaten and Weller was presented for s up to 16. Extending it to a larger number of dimensions would be possible; however, doing so requires significant computation.

We have developed new permutations $\Sigma(a_i(j,n))$ that break the cycles of the Halton sequence. The idea of this algorithm (named HaltonRR2) is to permute the values of $a_i(j,n)$ by reversing the binary digits of integers expressed using a fixed number of base-2 digits and removing any values that are too large. This is shown in Table I, where we demonstrate the generation of the coefficients $\Sigma(a_i(j,n))$ for dimensions 11 down to 6 of a generalized Halton sequence. The first column of Table I shows the coefficients $a_i(11,n)$ (i.e., 0 to 30) for the 11th prime, 31. The second-to-last column is the reversed five-digit binary code of the numbers 0 to 31, and the last column is the decimal value of that binary code; these values are the permutation of the coefficients $a_i(11,n)$. For this particular case all the numbers are used except the last, corresponding to 31; in the general case, however, if the reversed integer in the last column is too large, the next value in that column is used.

The permutations for the lower dimensions (s = 10 down to 6) are also shown in Table I. These skip values in the last column that are larger than or equal to the corresponding prime. It can be seen from the table that the permuted coefficients are well mixed, with each new value tending to fill one of the larger gaps in the preceding partial sequence (the method can be derived in this manner). The original reversed-digit integer sequence (last column) contains cycles of length two that combine into cycles of length four and then eight. The cycles of length two are reduced to a single element as values are dropped from the sequence and may then merge with an adjacent element to produce another cycle of length two.
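The reverse-radix construction described above can be sketched in a few lines; this is our reading of the Table I procedure, with our own function names, not the authors' implementation.

```python
# Sketch of the reverse-radix (HaltonRR2) permutation: reverse the k-bit
# binary representation of 0, 1, 2, ... and keep, in order, the reversed
# values that are smaller than the base b.

def bit_reverse(n, k):
    """Reverse the k-bit binary representation of n."""
    r = 0
    for _ in range(k):
        r = (r << 1) | (n & 1)
        n >>= 1
    return r

def rr2_permutation(b):
    """Permutation of 0..b-1 obtained by bit reversal, skipping any
    reversed value that is >= b (the 'too large' values)."""
    k = max(1, (b - 1).bit_length())  # digits needed to cover 0..b-1
    perm, n = [], 0
    while len(perm) < b:
        v = bit_reverse(n, k)
        if v < b:                     # drop reversed values that are too large
            perm.append(v)
        n += 1
    return perm
```

For b = 31 (the 11th prime) this reproduces the mixing described for Table I: the permutation starts 0, 16, 8, 24, ..., each new value falling into one of the larger remaining gaps.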

The permutations for the HaltonRR2 sequence can be readily generated for any reasonable number of dimensions, and hence so can the sequence points. We also generated points of the Braaten-Weller sequence using the coefficients listed by Braaten and Weller [1979] for s = 16. The uniformity of the points of these two sequences was then observed in two-dimensional planes selected at random for N up to $10^6$. It was found that the patterns created by the two sequences are similar and do not depend on the combination of dimensions selected. Figures 1(c) and (d) show the distribution of the first 2000 points of the Braaten-Weller sequence and of the HaltonRR2 sequence in the planes given by dimensions 15-16 and 39-40, respectively. A general feature of the two sequences is that for smaller samples (N < 300) they appear random, but the regularity of the points then increases, displaying patterns as seen in Figures 1(c) and (d).

Further experiments with permutations of the coefficients $a_i(j,n)$, including random permutations, did not find sequences more uniform than the HaltonRR2 sequence. The reverse-radix algorithm can be applied in many different ways, in the sense that selected groups of the digits in the last two columns of Table I can be reversed or interchanged (e.g., by adding and truncating, or logically exclusive-oring, the reversed-digit integers with a given value). However, the options tested showed no improvement.


While the introduction of permutations of the $a_i(j,n)$ breaks the correlations between radical inverse functions of different dimensions, it introduces some degree of apparent randomness. The Braaten-Weller and HaltonRR2 sequences are believed to have very similar properties. An advantage of the HaltonRR2 sequence is the simplicity of generating its permutations, and hence its applicability for very large values of s. We also believe that the permutations utilized by the Braaten-Weller sequence and by the HaltonRR2 sequence are close to the best that can be achieved with the general formula (8).

2.4 Halton Sequence Leaped

Besides employing permutations on the right side of (8), another efficient method to eliminate the cycles of the Halton sequence is to use only every Lth Halton number, subject to the condition that L is a prime different from all the bases $b_1, \ldots, b_s$ used. This gives a formula for generating the leaped Halton points: namely, n in (2) is replaced by mL, where L is the leap and m = 1, 2, 3, . . . (in the actual construction of the Halton sequence leaped, one of the bases is omitted and used as the leap value).
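The leaped construction is a one-line change to the ordinary Halton generator; the sketch below (our code and naming) makes the index substitution n = mL explicit.

```python
# Sketch of the Halton sequence leaped: the mth point uses index m*L,
# where the leap L is a prime different from every base in use.

def radical_inverse(n, base):
    """Phi_b(n): reflect the base-b digits of n about the radix point."""
    x, denom = 0.0, 1.0
    while n:
        n, digit = divmod(n, base)
        denom *= base
        x += digit / denom
    return x

def halton_leaped(m, bases, leap):
    """mth point (m = 1, 2, ...) of the Halton sequence leaped by `leap`."""
    assert leap not in bases  # the leap must differ from all bases used
    return tuple(radical_inverse(m * leap, b) for b in bases)

pt = halton_leaped(1, [2, 3, 5], 7)
```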

The high-order digit (base p) of the coordinate generated by a prime p of a Halton sequence leaped using a prime L (different from p) is of the form mL mod p. As m takes integer values starting from 1, mL mod p cycles through the values 0 to p - 1, giving all the high-order digits (base p) for that coordinate in each cycle. Similarly, mL mod $p^k$ cycles through the values 0 to $p^k - 1$, giving each of the $p^k$ combinations of the k high-order base-p digits for that coordinate. Looked at in this manner, leaping can be regarded as another method of permuting the digits in a Halton sequence. However, in this case additional low-order digits are appended to the sequence, giving values slightly perturbed from their usual value. These perturbed values are not repeated in the sequence, and the different prime bases ensure independent behavior of each coordinate, as in the case of other Halton-based sequences. The effect of leaping a Halton sequence is to choose a subset that progressively fills all parts of the unit cube in a manner similar to the original sequence.

Figure 1(b) shows the projection of the first 2000 points of the Halton sequence leaped onto the plane given by dimensions 39 and 40. The local uniformity of the Halton sequence leaped, projected into two dimensions, is very good. Its disadvantage is that, as the cycles of the original Halton sequence are destroyed by leaping, new secondary cycles emerge in the Halton sequence leaped. Fortunately, the length of the secondary cycles is relatively small (2-15), depending on the combination of dimensions observed. The theoretical properties of these cycles have not at this stage been investigated, and it may be necessary to exclude primes that produce cycles longer than desired for a given application. The major difference between the HaltonRR2 sequence and the Halton sequence leaped is that while the HaltonRR2 sequence has no cycles and the local uniformity of its points is of an apparently random character, the Halton sequence leaped achieves very good local uniformity but does contain short cycles.

A relatively simple way to find the optimal leap(s) is to compute the maximum error of integrals of known functions over $[0,1]^s$ for a range of leaps. It is obvious that the maximum error of integration with the Halton sequence leaped will depend on L, N, s, and the test function f. Good leap values were found experimentally by minimizing the integration error for a range of dimensions (s) and numbers of sample points (N), using the functions $F_1$, $F_2$, $F_3$, $F_8$, and $F_9$ given in the Appendix. The values of L, s, and N were varied in the ranges 3-3000, 1-400, and 10-$10^5$, respectively. The best leap values found were 31, 61, 149, 409, and 1949. For dimensions 1-100 it was difficult to choose the optimal leap from these values, because the error of integration depended on the function used and because the sum of the maximum integration errors was almost the same for the five leaps listed. However, in the range s = 100-400, the best leap value found was clearly L = 409.

3. (t,s)-SEQUENCES

An alternative approach to the generation of low-discrepancy sequences is to start with points placed into certain equally sized volumes of the unit cube ((t,m,s)-nets). This is possible for sequences of a fixed length, and these then allow related sequences of indefinite length to be defined ((t,s)-sequences).

3.1 (t,m,s)-Nets and (t,s)-Sequences

An s-dimensional (t,m,s)-net is an s-dimensional grid with the majority of its points omitted. While a complete s-dimensional grid in base b contains $b^s$ points, a (t,m,s)-net contains only $b^m$ points, with m < s. The selection of the remaining points is made with respect to specific requirements imposed on their uniformity.

The specific requirements imposed on the uniformity of (t,m,s)-nets are expressed by means of so-called elementary intervals $E \subseteq [0,1]^s$, defined as

$E = \prod_{i=1}^{s} \left[ a_i b^{-d_i}, (a_i + 1) b^{-d_i} \right)$ (9)

where $a_i$ and $d_i > 0$ are integers satisfying $0 \le a_i < b^{d_i}$ for $1 \le i \le s$ [Krommer and Ueberhuber 1994; Press et al. 1992]. A (t,m,s)-net in base b is defined as a point set of $b^m$ points in $[0,1]^s$ such that every elementary interval E in base b with hypervolume $b^{t-m}$ contains exactly $b^t$ points.
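The net definition can be checked numerically in the simplest case. The sketch below (ours, not from the paper) verifies that the first $b^m$ points of the base-b van der Corput sequence form a (0,m,1)-net: every elementary interval of width $b^{-m}$ contains exactly $b^0 = 1$ point. The interval index is computed exactly in integer arithmetic to avoid floating-point edge cases.

```python
# Verify the (t,m,s)-net property for s = 1, t = 0 using the van der
# Corput sequence in base b.

def interval_index(n, base, m):
    """Exact integer value of floor(Phi_base(n) * base**m): the index of
    the width base**-m elementary interval holding the nth point."""
    idx = 0
    for _ in range(m):
        n, digit = divmod(n, base)
        idx = idx * base + digit   # reversed digits give the interval index
    return idx

def is_net(b, m):
    """True if points 0 .. b^m - 1 hit each of the b^m intervals once."""
    counts = [0] * b**m
    for n in range(b**m):
        counts[interval_index(n, b, m)] += 1
    return all(c == 1 for c in counts)
```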

A (t,s)-sequence in base b is an infinite sequence in which subsequences of length $b^m$ are (t,m,s)-nets. Multidimensional (t,s)-sequences were originally introduced by Sobol [1967] for b = 2. Sobol also suggested an algorithm for generating a (0,s)-sequence with s > 1 and b = 2. While the theoretical procedure that leads to his design is complex, the resulting algorithm for generating the sequence is quite straightforward [Antonov and Saleev 1979; Bratley and Fox 1988; Niederreiter 1992]. A computer implementation of the Sobol sequence was given by Antonov and Saleev [1979], who modified the sequence to improve the efficiency of the code. Their version of the Sobol sequence was later implemented again by Bratley and Fox [1988] and Press et al. [1992].

We used the computer implementation of Bratley and Fox (ACM Algorithm 659) to generate the points of the Sobol sequence for s = 40. The uniformity of the points generated was then observed in two-dimensional planes selected at random. Figure 2(a) shows a typical pattern created by the points of the Sobol sequence. Note that the size of the patterns formed by the Sobol sequence is larger than that of the Braaten-Weller sequence and the HaltonRR2 sequence.

A construction of a (t,s)-sequence different from that of Sobol was suggested by Faure [1982]. The Faure sequence was implemented and experimentally studied by Fox [1986] for $s \le 40$ with base b = 41 (ACM Algorithm 647). Fox found that for larger values of s, the properties of the Faure sequence are very bad for the first $b^4 - 2$ points. Accordingly, the Fox implementation starts from $n = b^4 - 2$. We have used the Fox implementation to generate the points of the Faure sequence for s = 40 and n up to $10^4$. The typical pattern of the Faure sequence for any combination of two dimensions and s = 40 is as in Figure 2(c).

A serious problem of the Faure sequence is that, as N increases, the points generated (when projected into two dimensions) do not consist of cycles that cover the unit square uniformly. Instead, they cover part of the square in diagonally oriented strips and then fall into strips that have been sampled previously, nearly on top of the points already there. Figure 2(c) shows the projection of the first 300 points of the Faure sequence onto the plane given by dimensions 39 and 40.

The star discrepancy of the first N points of a (t,s)-sequence in base b satisfies the inequality

$D_N^* \le C(s,b) \frac{(\log N)^s}{N} + O\left( \frac{(\log N)^{s-1}}{N} \right)$ (10)

where the constant implied in the second term depends only on b and s [Krommer and Ueberhuber 1994]. The constant of the leading first term is

$C(s,b) = \frac{1}{s} \left( \frac{b - 1}{2 \log b} \right)^s$ (11)

if s = 2, or b = 2 and s = 3,4. In all other cases we have

$C(s,b) = \frac{1}{s!} \cdot \frac{b - 1}{2 \lfloor b/2 \rfloor} \left( \frac{\lfloor b/2 \rfloor}{\log b} \right)^s$ (12)

[Niederreiter 1992]. For s = 400 and the Sobol sequence, the value of the constant above is very large, which does not correspond to the real properties of this sequence. On the other hand, Faure [1982] gives constants for his sequence that decrease rapidly with s.

3.2 (t ,s)-Sequences Leaped

A short investigation of the effect of leaping on the uniformity of the generated points was also undertaken for the Sobol and the Faure sequences for samples $N < 10^4$ and s = 40, using the computer implementation of Bratley and Fox [1988]. Experimenting with the Sobol sequence showed that the Sobol sequence leaped can be created with leaps satisfying the condition $L = 2^m$, where m is a positive integer. The first coordinate of the Antonov-Saleev version of the Sobol sequence leaped generated in this way has to be either omitted or scaled as [Mathematical Expression Omitted], and the second coordinate is either omitted or scaled as [Mathematical Expression Omitted]. Leaps different from the values $2^m$ lead to sequences that for smaller N appear random. For larger values of N, observations of the generated points indicated that for leaps $L \ne 2^m$ (m = 1, 2, ...) the Sobol sequence leaped does not fill the available volume uniformly.

Figure 2(b) shows the projection of the first 2000 points of the Sobol sequence leaped (L = 4); Figure 2(a) represents a typical pattern of the original Sobol sequence. Comparison of Figures 2(a) and (b) suggests that, for the combination of dimensions shown, the pattern of the Sobol sequence leaped is finer than that of the original Sobol sequence. This happens for the majority of combinations of two dimensions. Nevertheless, it appears that the uniformity of the Sobol sequence leaped is similar to that of the Antonov-Saleev version of the Sobol sequence; however, a detailed investigation has not been completed.

Properties of the Faure sequence leaped were studied experimentally using the Fox implementation for s = 40 with base b = 41 and leaps L = 2 to 31. Observation of the projections of the points of the Faure sequence leaped in two-dimensional planes selected at random demonstrated a considerable improvement over the original Faure sequence. Any value of the leap (except L = b = 41) improved the Faure sequence. The best results were achieved for leaps L = 5, 6, 12 to 17, and then around 30.

The effect of leaping on the Faure sequence is demonstrated in Figure 2(d), which can be compared with the original in Figure 2(c). It is suspected that a Faure sequence leaped with uniformity as good as the Halton sequence leaped with L = 409 could be developed. However, these investigations have not been undertaken.


The error of multidimensional integration can be examined in several different ways. The following sections discuss several theoretical and numerical aspects of obtaining useful error estimates.

4.1 Error Bounds for Quasi Monte Carlo Integration

The error bound of the quasi Monte Carlo integration is given by the Koksma-Hlawka inequality

$\left| \frac{1}{N} \sum_{n=1}^{N} f(x_n) - \int_{[0,1]^s} f(x)\, dx \right| \le V(f)\, D_N^*$ (13)

where V(f) is the variation of f in the sense of Hardy and Krause, and $D_N^*$ is the star discrepancy of the N sample points. Details of this inequality are given in Niederreiter [1992] and Krommer and Ueberhuber [1994].

Knowing the variation of the function, we can say that the error bound (13) is determined by the star discrepancy of the sequence used for the quasi Monte Carlo integration. Unfortunately, the theoretical discrepancy bounds of the existing sequences are not usable for moderate and large values of s. The other option, a numerical evaluation of the star discrepancy of a sequence for large s, requires an excessive computational effort, and even reasonable numerical estimates of such discrepancies are very difficult to obtain.

4.2 An Alternative View of Error Bounds

An alternative estimate of the accuracy of quasi Monte Carlo integration can be obtained by considering the volumes between the sample points. For N points in an s-dimensional unit cube, first construct the volumes within the cube around each sample point that are closer to that point than to any other point (this gives the multidimensional Voronoi diagram). These N volumes have an average volume of 1/N, and hence some point in at least one of the volumes must be at least a distance $R_s^{-1}(1/N)$ from any sample point, where $R_s(r) \equiv r^s (2 \pi e / s)^{s/2} / (\pi s)^{1/2}$ is the volume of an s-dimensional hypersphere of radius r. For a sequence of points nearly uniformly distributed over the hypercube, we can expect order N points at approximately this distance from any sample point. Further, as s + 1 sample points are needed to define a point equally distant from the sample points, we expect only s + 1 or slightly more sample points to be near this distance from these order N points.

The difference between the integrand and a function that is integrated exactly by the sample points is a function of the distance r from the sample points, typically $r^c$ with c about 1 or 2, depending on the assumptions made (c = 2 can be obtained for uniform points, while c = 1 may apply to points not on a uniform grid). Hence the integration error arising from a region whose center is a distance $R_s^{-1}(1/N)$ from any sample point, as there are only a few sample points on the boundary of the region, is of the order

$\frac{s^{c/2}}{N^{1+c/s}} \cdot \frac{(\pi s)^{c/(2s)}}{(2 \pi e)^{c/2}}$ (14)

and we have order N regions with an error of this size. The last two terms in this expression vary little and hence will be replaced by a constant.

If it is now assumed that the errors are random variables, which is reasonable if the sample points are not exactly equally spaced and/or if the function has both convex and concave regions, then the rule for combining variances can be used to estimate the total error of integration arising from the order-N regions giving

$s^{c/2} / N^{1/2 + c/s}$ (15)

as the order of the error of integration. Note that if the errors cannot be considered random, the total error can be larger, as in the case of a convex function and equal spacing, or smaller, as in the case of periodic functions where the errors tend to cancel. An approximation of an integral as a progressive unweighted sum is also limited in accuracy by 1/N times the maximum deviation of the integrand from the integral value or, for a typical amount of variation, 1/N times the standard deviation of the integrand. For s = 1 this second limit will apply unless the points satisfy some special condition such as being equally spaced. It should be noted that the above derivation is independent of axis rotation, whereas the star discrepancy is specific to the particular axis directions used.
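The behavior of the order expression (15) is easy to tabulate; the small sketch below (our code and naming) shows how the exponent on N decays from 1/2 + c/s toward the plain Monte Carlo value 1/2 as s grows.

```python
# Evaluate the error-order expression (15): s^(c/2) / N^(1/2 + c/s).
# For small s the exponent on N is well above 1/2; for s = 400 it is
# barely above the Monte Carlo exponent 1/2.

def qmc_error_order(N, s, c=1.0):
    """Order-of-magnitude error estimate (15) for quasi Monte Carlo."""
    return s ** (c / 2) / N ** (0.5 + c / s)
```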

A heuristic argument given in Press et al. [1992] implies that for s = 3 the number of points close to discontinuities of the integrand is proportional to $N^{2/3}$. This can be extended to s dimensions to give $N^{(s-1)/s}$ points. These points, being close to the discontinuities of the integrand, can be considered random. Assuming equal errors for each of these points, the variance of the integral can be estimated, and its square root gives $O(N^{-(s+1)/(2s)})$ as the estimate of the integral accuracy. This result, which is for a discontinuous integrand, corresponds to the previous result with c = 1/2.

A recent paper by Katscher et al. [1996] also derives error estimates of the form $O(1/N^{1/2 + e/s})$ for the integration of convex functions.

The above expressions are in adequate agreement with many experimental results, provided that the integrand does not simplify into additive components of lower dimensions, in which case the accuracy of each component needs to be considered. The tendency for the error in larger dimensions to have a larger multiplier is thought to be partly due to the larger gradients that typically occur in higher-dimensional integrands that do not simplify. The value of the power coefficient c is difficult to obtain accurately from numerical investigations, but a value near 1/2 has been estimated from the simulations described in this article.

4.3 Decomposition of Integrands

The previous section indicates that the accuracy of an integration depends on the dimension of the components of the integrand. An arbitrary integrand can be decomposed into a sum of components of lower dimensions in the following form [Hickernell 1995; Owen 1992; 1995; Whiten and Kocis 1995]:

f([x.sub.1], ..., [x.sub.s]) = [f.sub.0] + [summation of] [f.sub.i]([x.sub.i]) + [summation of] [f.sub.i,j]([x.sub.i], [x.sub.j]) + ... + [f.sub.1,...,s]([x.sub.1], ..., [x.sub.s]) (16)

where the subscripts of f are all combinations having a strictly increasing order, and the integral of each component function with respect to the variables in that component, and with respect to any subset of these variables, is zero. The component values may be found in principle by integrating both sides of (16) with respect to subsets of the variables and solving the resulting set of equations. It is easily seen that the components of the decomposition are unique. By first considering a two-dimensional function, it can be seen that the n-dimensional component in (16) must have at least [2.sup.n] regions with alternating signs.

As the differently dimensioned components of this decomposition are expected to behave differently in quasi Monte Carlo integration, they need to be investigated separately to obtain an understanding of the properties of this type of integration. For components with a larger number of variables, the number of different regions in the function can become larger than the number of samples used in the integration. This justifies the randomness assumption in the previous section.
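As an illustration of the decomposition (16) in the simplest nontrivial case, the following sketch (our construction, not from the paper) computes the components of a two-dimensional integrand numerically on a midpoint grid. Each component integrates to zero over each of its own variables, and for f(x, y) = xy the purely two-dimensional part comes out as (x - 1/2)(y - 1/2), which indeed has [2.sup.2] = 4 alternating-sign regions.

```python
import numpy as np

def decompose_2d(f, n=400):
    """Numerically decompose f(x, y) on [0,1]^2 into the components of (16):
    f = f0 + f1(x) + f2(y) + f12(x, y), each component integrating to zero
    over each of its own variables (midpoint-rule approximation)."""
    x = (np.arange(n) + 0.5) / n                 # midpoint grid on [0, 1]
    X, Y = np.meshgrid(x, x, indexing="ij")
    F = f(X, Y)
    f0 = F.mean()                                # integral of f
    f1 = F.mean(axis=1) - f0                     # integrate out y
    f2 = F.mean(axis=0) - f0                     # integrate out x
    f12 = F - f0 - f1[:, None] - f2[None, :]     # purely two-dimensional part
    return f0, f1, f2, f12

# example: f(x, y) = xy gives f0 = 1/4, f1(x) = x/2 - 1/4,
# and f12 = (x - 1/2)(y - 1/2)
f0, f1, f2, f12 = decompose_2d(lambda x, y: x * y)
```

Solving for the components in increasing order of dimension, as here, is exactly the "integrate both sides with respect to subsets of the variables" procedure described above.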

As the composition of an actual integrand is not known, a good sequence for quasi Monte Carlo integration needs to be near optimal for each of the components in this decomposition.

4.4 Error Estimates from Integration of Test Functions

The alternative to trying to get a theoretical error bound is to determine the error of the quasi Monte Carlo integration by integrating functions whose integral over [[0,1].sup.s] is known. However, this approach to determine the error bound for the quasi Monte Carlo integration using a given sequence is not without problems. First, it is not immediately obvious how the error to be reported should be computed. Second, it is not clear how test integrands should be chosen so that comparable error values are obtained for different functions and different dimensions.

The relative error may be of interest when evaluating a particular integral, but as adding a constant to the integrand alters the relative error, it is not appropriate for this type of study. The effect of a constant term in the integrand, which is always integrated exactly by Monte Carlo integration, can be removed by choosing the constant so that the integral is zero. The actual error value must then be used to evaluate the result of the integration. The terms in the decomposition (16) all have zero integrals, also indicating that test functions with a zero integral are appropriate.

Having decided on integrals with a zero value we see that the scale of the integrand needs to be set. Sections 4.1 and 4.2 both indicate that the amount of variation in the integrand has, as might well be expected, a dominant effect on the size of the integration error. In the case of Monte Carlo integration the expected variance of the integral is the variance of the integrand divided by the number of sample points N. Hence if the integrands are normalized to have a variance of one then the results of test integrations can be easily compared with the expected result of Monte Carlo integration. For a large number of dimensions the error in quasi Monte Carlo integration has been found to be similar to that of Monte Carlo integration, and hence this normalization is also convenient for examining the results of quasi Monte Carlo integration.

The error of an integral at a particular value of N varies in an apparently random manner within the accuracy range, as N changes or as the integrand is changed, and this can easily be misleading if the results of single integrations are presented. It is necessary to obtain a more consistent value that indicates either the expected size of the error or a bound on the error. For this work, we have chosen to use the maximum error within a range of N values. This tends to overestimate the error expected for a particular integral but is regarded as a safe option. We also examined the root mean squared error and found it to be generally proportional to the maximum error.

As we expect the error from numerical integration to be of the form (15), plotting the maximum error for geometric intervals of N on log-log scales gives a useful indication of the error properties and allows easy comparison of the various sequences. It would be desirable to average the results from a large number of similar integrands; however, computational limitations and the difficulty of determining which integrands are similar have stopped us from taking this approach.
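A minimal sketch of this bookkeeping (our code, not the paper's program) records the maximum absolute error within geometrically spaced intervals of N. It assumes the normalized sum integrand [F.sub.1] from the Appendix, sqrt(12/s)(sum of x_j - s/2), and uses plain Monte Carlo points as a stand-in for a low-discrepancy sequence:

```python
import numpy as np

rng = np.random.default_rng(0)
s, n_max = 40, 10**5

def F1(x):
    # normalized sum integrand: zero mean, unit variance over [0,1]^s
    return np.sqrt(12.0 / s) * (x.sum(axis=1) - s / 2.0)

x = rng.random((n_max, s))                    # MC stand-in for a sequence
running = np.cumsum(F1(x)) / np.arange(1, n_max + 1)
errors = np.abs(running)                      # true integral is zero

# maximum absolute error within ~20 geometrically spaced intervals of N
edges = np.unique(np.geomspace(2, n_max, 21).astype(int))
max_err = [errors[lo - 1:hi].max() for lo, hi in zip(edges[:-1], edges[1:])]
```

Plotting log(max_err) against log(N) then shows the apparent convergence rate; for random points the slope should be near -1/2.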

The choice of test functions for integration is more difficult. We have seen that different results are expected for functions of different essential dimensionality. Mixing such functions will result in one function dominating the error. Hence it is better to separate the components and examine their properties separately. As a general integrand is the sum of such components, the types of behavior that might be seen are easily determined from the behavior of the components.

Normalization of the integrands, the separation of components of the integrand with different dimensionality, and the averaging of results are needed to provide a meaningful picture of the results of test integrations. Computational investigations reported in the literature [Bratley and Fox 1988; Braaten and Weller 1979; Bratley et al. 1992; Fox 1986] include integrands with both their average and variance significantly dependent on s, and they use different definitions of the error. This makes some results difficult to interpret and compare.

The tests described below use integrands that do not decompose to sums of components of lower dimensions and integrands that consist solely of equal components of a single lower dimension. Within each class of integrands, functions with a range of properties have been selected.


For a range of given values of s, the error of sample quasi Monte Carlo integrations was computed for values of N in the range 2 to [10.sup.5]. Twenty values [N.sub.k] in this range were selected so that they increased approximately exponentially. The maximum error for [N.sub.k] was evaluated as the maximum of all absolute errors in the range [N.sub.k-1] [less than] N [less than or equal to] [N.sub.k] and was written, together with [N.sub.k], to the output file of the program. For N less than about 30 the results tend to be random. For larger samples the maximum error of the quasi Monte Carlo integration should depend only marginally on the integrand, provided that integrand is slowly varying and smooth.

5.1 Test Functions

A list of the test functions used for the computation of the maximum error of the integration, each having zero mean and unit variance, is given in the Appendix [TABULAR DATA FOR TABLE II OMITTED]. Functions [F.sub.1] to [F.sub.3] are constructed as sums of single-variable functions [f.sub.j]. The single-variable functions used in the construction of [F.sub.1] are linear (as a base for comparison); those in [F.sub.2] are convex; and those in [F.sub.3] are concave. Note that each of the functions [F.sub.1] to [F.sub.3] is smooth with only one maximum and one minimum.

While functions [F.sub.1] to [F.sub.3] are well behaved, and thus satisfy conditions for proper quasi Monte Carlo integration, they are not truly multidimensional in that they are a sum of lower-dimensional functions. Functions [F.sub.4] to [F.sub.7] are constructed as products, and thus they can certainly be regarded as genuine multivariate functions. The price for this status is that for these functions the number of extremes increases as [2.sup.s]. If, for example, s = 40, then [2.sup.40] [approximately equal to] 1.1 x [10.sup.12], and the N points will sample only a negligible part of the 1.1 x [10.sup.12] separate sections in which these functions are alternately negative and positive. Hence the conditions for proper quasi Monte Carlo integration are not satisfied, and the rate of the error convergence cannot be expected to match that obtained when functions [F.sub.1] to [F.sub.3] are integrated.
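For instance, taking [F.sub.4] to be the product of the [plus or minus]1 step functions listed in the Appendix (our reading of the Appendix entry), its zero mean and unit variance are easy to confirm by sampling, even though the [2.sup.40] sign regions cannot all be visited:

```python
import numpy as np

rng = np.random.default_rng(1)
s, n = 40, 10**5
x = rng.random((n, s))

# F4 as a product of +/-1 step functions (form assumed from the Appendix):
# each factor has zero mean and unit variance, hence so does the product
# of independent factors -- yet the product flips sign across 2**s cells.
F4 = np.prod(np.where(x < 0.5, -1.0, 1.0), axis=1)

est_mean = F4.mean()   # Monte Carlo estimate of the (zero) integral
est_var = F4.var()     # should be close to 1
```

Because F4 takes only the values -1 and +1, its sample variance is 1 minus the squared sample mean, so the normalization holds for any sample size.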

The functions [F.sub.4] to [F.sub.7] have been chosen to cover the range from discontinuous, but limited to values of -1 or +1, to very smooth (i.e., linear components) but containing extreme maxima and minima. The conditions of the quasi Monte Carlo integration worsen with an increase of the absolute values of the extremes of the product functions. Table II shows that the absolute value of the extremes of functions [F.sub.5] to [F.sub.7] increases rapidly with s and with the subscript of the function. The purpose of the integration of functions [F.sub.4] to [F.sub.7] was to observe the behavior of the integration error as the number of dimensions increases and as the smoothness and corresponding extremes of the functions change.

Functions [F.sub.8] and [F.sub.9] have only two-dimensional components being constructed as sums of products of all combinations of two one-variate functions. This allows the simultaneous testing and averaging of all the possible two-dimensional functions available within the chosen number of dimensions s. The function [F.sub.8] was chosen to correspond to [F.sub.5], having flat regions and discontinuities, while [F.sub.9] is smooth and similar to [F.sub.6] in being based on one-dimensional cubic functions. They are functions with a large number of different regions; however, unlike functions [F.sub.5] to [F.sub.7], their extremes do not increase rapidly with the number of dimensions.

5.2 Results of Integration for Sum Functions

Figures 3 and 4 show the error of integration of function [F.sub.2] as it depends on the number of sample points, N, and the number of dimensions, s. Figure 3(a) demonstrates the conditions of integration of [F.sub.2] for s = 1. As can be observed from this figure, the Halton, HaltonRR2, and Braaten-Weller sequences give the same error. This is because for s = 1 these sequences are identical. The maximum error for the Sobol and the Faure sequences is marginally better, and the best result is achieved with the Halton sequence leaped. Figure 3(b) is for s = 2. In this case, the differences between the maximum errors of the different sequences are marginally larger.

The maximum error of integration due to the Braaten-Weller sequence for s = 2, 5, and 16 was found to be almost the same as that for the HaltonRR2 sequence. Hence the error lines of these two sequences are the same.

In Figure 3(c) (s = 5) the spread between the error lines is again larger, but their average slope remains close to the value of -1 except for the initial samples where the maximum error is underestimated. Figure 3(d) (s = 16) shows that the best performance is achieved with the Halton sequence leaped and the Sobol sequence, which give approximately the same maximum error. Performance of the latter two sequences is closely followed by the HaltonRR2 and Braaten-Weller sequences (which give the same maximum error) and the Faure sequence, which performs marginally worse. A very poor performance is achieved with the Halton sequence, represented by the error line at the top.

Figure 4(a) shows the maximum error of integration of function [F.sub.2] for s = 40. The differences between the sequences are now considerably larger, and the sequences studied can be ordered according to their performance. The poor performance of the Halton sequence results from the unsatisfactory uniformity of its points, caused by excessively long cycles; see Figure 1(a). Better performance is achieved with the Faure sequence, which in the case shown has base b = 41 (the largest base of the Halton sequence used for s = 40 is [b.sub.40] = 173). The best sequence appears to be the Halton sequence leaped, closely followed by the Sobol sequence and the HaltonRR2 sequence.

The maximum error of integration of the three sequences that were implemented for an almost unrestricted number of dimensions is shown in Figures 4(b)-(d) for s = 100, 200, and 400. The properties of the maximum error of integration of [F.sub.2] seen in these cases show that while the performance of the Halton sequence is clearly unsatisfactory, the HaltonRR2 sequence and the Halton sequence leaped perform well.

The maximum error of integration of the functions [F.sub.1] and [F.sub.3] is remarkably similar to that for [F.sub.2]. Thus, it is fair to hypothesize that all functions that are smooth, with only one or two extremes of reasonable magnitude, will give results similar to those in Figures 3 and 4.

To examine the quality of the sequences as it depends on the number of dimensions, s, the maximum error of integration at N = [10.sup.5] samples was averaged over functions [F.sub.1], [F.sub.2], and [F.sub.3]. The maximum error obtained in this way was then plotted against the number of dimensions, s, in Figure 5. Figure 5 suggests an ordering of the investigated sequences according to their quality. The first graph in Figure 5 demonstrates that the quality of the Halton sequence deteriorates quite significantly with s. The other sequences show only a limited decrease in quality as the number of dimensions increases.

5.3 Results of Integration for Product Functions

The average slope of the log of the maximum error of integration of [F.sub.4] versus log(N) was found to be in good agreement with the value of -(s + 1)/(2s), except for the Sobol sequence, for which the average slope is approximately -0.9. This is because the Sobol points and the function [F.sub.4] both exhibit symmetry with respect to the centre of the hypercube. This advantage of the Sobol sequence disappears if the integrand does not have this property. Such agreement between the symmetry of the sequence and of the integrand is not typical and should be avoided in the testing of sequences. If there is a known symmetry in the integrand, it should be used to reduce the limits of the integration.

As the extremes of the integrand increase (see Table II), the starting points of the error lines exhibit a considerable scatter; see Figures 6(a) and (b). This scatter is of a random character and does not indicate the quality of the sequences. The initial average slope of the error lines for [F.sub.5] to [F.sub.7], for a sufficiently large s, can be between -1 and 0 depending on whether or not one of the first points has hit one of the function's sharp peaks. As N increases, the error lines with different starting points tend to approach each other and often appear to meet after about

[N.sub.critical] = K Vol([[0,1].sup.s]) / Vol(part of [[0,1].sup.s] with large values of F) (17)

samples, where K = 15 to 30. The fact that for a sufficiently large s the value of functions [F.sub.5] to [F.sub.7] is significant only in relatively small areas of the hypercube implies that samples with an almost zero value of the function are frequent, but those with a large value of the function are rare. Since the areas with significant values of the integrand are unrelated to the sampling, the occurrence of the significant samples is basically a random process. When such a significant sample occurs it causes an abrupt change in the value of (1/N)[summation of] [F.sub.k]([x.sub.i]) where i = 1 to N, and the error convergence is then worse than 1/[square root of N] ([F.sub.k] denotes one of the product functions [F.sub.5]-[F.sub.7]). The value of (17) increases sharply with s because the functions become nearly zero over most of the hypercube as the function extremes increase; see Table II.
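This effect can be made concrete with a stand-in for the [F.sub.5]-[F.sub.7] family (our assumption, since the exact Appendix forms are partly omitted): a product of normalized linear factors. The code below estimates the fraction of the hypercube where the integrand is significant and the resulting order of [N.sub.critical] from (17); already at s = 10 only about 5% of the volume carries values exceeding the standard deviation.

```python
import numpy as np

rng = np.random.default_rng(2)
s, n = 10, 10**5

# A product of normalized linear factors (an assumed stand-in for the
# F5-F7 family): each factor sqrt(3)(2x - 1) has zero mean and unit
# variance, so the product does too, but its extremes grow as 3**(s/2).
x = rng.random((n, s))
F = np.prod(np.sqrt(3.0) * (2.0 * x - 1.0), axis=1)

# fraction of the hypercube where the integrand is "significant"
frac = np.mean(np.abs(F) > 1.0)
# the heuristic (17) then suggests N_critical ~ K / frac with K = 15 to 30
N_critical = 20.0 / frac
```

As s grows this fraction shrinks rapidly, so [N.sub.critical] soon exceeds any practical sample size, in line with the discussion above.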

5.4 Results of Integration for the Sum of Product Functions

The average slope of the error lines obtained from integration of functions [F.sub.8] and [F.sub.9] is in reasonable agreement with the formula -(s + 1)/(2s). For s [greater than] 10 this formula implies an average slope of approximately -0.5, which means that integration with low-discrepancy sequences has reduced to the common Monte Carlo method. This can be explained by realizing that, for larger s, functions [F.sub.8] and [F.sub.9] have many separate regions, and only a few of these regions are sampled with any reasonable N. Hence the error convergence is proportional to 1/[square root of N]. For small values of s the maximum error of integration of [F.sub.8] and [F.sub.9] does not depend on the sequence used. For larger s (40 to 400) and smaller samples (N [less than] [10.sup.4]) the differences between the sequences become apparent, and the best result is for the Halton sequence leaped; see Figure 7. This implies that initially the points of this sequence are uniform enough to guarantee that both the areas where the integrand is positive and the areas where it is negative are sampled with approximately equal frequency. For larger N, the performance of the Halton sequence leaped and the HaltonRR2 sequence becomes similar; see Figures 7(b) and (d).

In spite of being composed of two-dimensional components, these functions have behaved more like what might be expected for functions with higher-dimensional components.


6.1 Uniformity of Low-Discrepancy Sequences

While observation of the distribution of the sample points projected onto planes given by a combination of two dimensions is a subjective method, it clearly demonstrates some basic features of the low-discrepancy sequences.

The Halton sequence gives uniform distributions for lower dimensions (1-10). As the number of dimensions increases, the quality of the Halton sequence rapidly decreases because the two-dimensional planes within the hypercube are sampled in cycles with increasing periods. For dimensions larger than about 30, the sample points of the Halton generator become ordered into lines.

The problems of the Halton sequence can be cured by the permutations [Sigma]([a.sub.i](j,n)) introduced in Eq. (8). Another option is to use (t,s)-sequences. The first approach is represented by the Braaten-Weller sequence and the HaltonRR2 sequence; the second approach is realized in the Sobol, Faure, Niederreiter, and Tezuka sequences.

The diagrams in Figures 1(c) and (d) demonstrate that the generalized Halton sequences remove the cycles of the original Halton sequence, but a certain degree of local nonuniformity is introduced. In its properties (uniformity and the error of integration) the HaltonRR2 sequence is quite similar to the Braaten-Weller sequence. The advantage of the HaltonRR2 sequence is in its algorithm, which is readily usable for an almost unrestricted number of dimensions.

The Sobol sequence preserves its uniformity as s increases; see Figure 2(a). The Faure sequence is an example of a less successful (t,s)-sequence. This sequence gives distributions quite different from those of the other generators. While it achieves a high degree of local uniformity, the unit-square projections of the hypercube are sampled in strips, and new points fall into the vicinity of those generated previously.

An efficient method to improve the performance of the Halton sequence is to use only every Lth point of that sequence (L [not equal to] [b.sub.1], ..., [b.sub.s]). Suitable leaps for s = 1 to 100 are 31, 61, 149, 409, and 1949. For s = 100 to 400 a good leap value is 409. The local uniformity of the Halton sequence leaped is very good; however, small cycles can be observed.
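A straightforward sketch of the leaped Halton construction (our implementation; the radical inverse is the standard van der Corput digit reversal, and the leap value is one of those quoted above):

```python
import numpy as np

def first_primes(k):
    """First k primes, used as the Halton bases b_1, ..., b_k."""
    primes = []
    n = 2
    while len(primes) < k:
        if all(n % p for p in primes):
            primes.append(n)
        n += 1
    return primes

def radical_inverse(n, b):
    """van der Corput radical inverse: reflect the base-b digits of the
    integer n about the radix point."""
    inv, denom = 0.0, 1.0
    while n > 0:
        n, d = divmod(n, b)
        denom *= b
        inv += d / denom
    return inv

def halton_leaped(n_points, s, leap=409):
    """s-dimensional Halton points, keeping only every leap-th point;
    the leap should differ from the bases b_1, ..., b_s."""
    bases = first_primes(s)          # for s = 40 the largest base is 173
    return np.array([[radical_inverse(1 + i * leap, b) for b in bases]
                     for i in range(n_points)])

pts = halton_leaped(1000, 40)        # 1000 points in 40 dimensions
```

Leaping discards most of the sequence, but since consecutive Halton points in the large-base dimensions are highly correlated, keeping every Lth point breaks up the long cycles at modest extra cost per retained point.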

Leaping of the Sobol sequence can be done with leaps L = [2.sup.m], (m = 1, 2, ...). Observation of the points of the Sobol sequence leaped projected into two-dimensional planes suggests that this sequence may be marginally better than the Sobol-Antonov-Saleev sequence. This needs further evaluation.

Leaping of the Faure sequence shows a considerable improvement. Almost any value of the leap (except the base) shows improvement. It is possible that optimal leaps for the Faure sequence will give a sequence with very good local uniformity and relatively small cycles.

6.2 Error of Integration

The low-discrepancy sequences studied were also evaluated by computing the maximum error of integration of nine test functions each having a zero mean and a unit variance. The test functions included sum functions, product functions, and a sum of product functions. Each of these classes included functions with different properties, and the apparent rate of numerical convergence was determined.

The maximum error of the quasi Monte Carlo integration of the sum functions indicates that the Halton sequence leaped (with L = 409) performs as well as the Sobol sequence, and that the HaltonRR2 sequence is of the same quality as the Braaten-Weller sequence but slightly less accurate than the Halton sequence leaped and the Sobol sequence. An overall evaluation of the integration of functions [F.sub.1] to [F.sub.3] favored the Halton sequence leaped. Successful quasi Monte Carlo integration is subject to the properties of the integrand, which has to be smooth, slowly varying, and with only a few maxima and minima. The sum functions [F.sub.1] to [F.sub.3] satisfy these conditions. The maximum error of the quasi Monte Carlo integration of [F.sub.1] to [F.sub.3] with good sequences and a realistic number of points is somewhat worse than 1/N, but certainly not [(log N).sup.s]/N for a large number of dimensions.

Observation of the maximum error of the quasi Monte Carlo integration of "oscillatory" functions with flat and moderate extremes ([F.sub.4], [F.sub.5] for s [less than] 16, and [F.sub.8], [F.sub.9]) shows that the error lines exhibit scatter and that the average slope of the error lines (on log scales) is close to -(s + 1)/(2s). For moderate and large s the number of different regions of functions [F.sub.4] to [F.sub.9] is large, and for these the quasi Monte Carlo integration degenerates into a random process with error convergence 1/[square root of N].

If the integrand has many spikes, which force the remaining regions to be close to zero (product functions [F.sub.5]-[F.sub.7] for sufficiently large s), then the initial value of the error of the integration lies between the maximum of the absolute value of the integrand and zero. The initial average slope is between -1 and zero. Then, after sufficiently many samples, the average slope becomes -0.5, or worse, depending on how narrow and sharp the spikes are. It is apparent that most of the N sample points are irrelevant and that the spikes are sampled by only a small fraction of the N points. This results in a very poor performance.

6.3 Final Comments

For the numerical evaluation of a particular high-dimension integral the above results indicate that if the integrand does not decompose into dominant simple components (such as the sum of one-dimensional functions) an apparent convergence rate of only 1/[square root of N] can be expected. Also, if a small part of the integral is a component that does not reduce to a sum of low-dimension components, this small part, having a lower rate of convergence, will eventually dominate the error of the integral.

A better knowledge of the properties of the integrand may provide an alternative to direct quasi Monte Carlo integration. Any symmetry can be used to reduce the integration limits. Transformation of the integrand and the use of importance sampling may be possible [Press et al. 1992]. If the integrand can be approximated by an integrable analytic function, the hopefully small difference can then be evaluated by quasi Monte Carlo integration. An approximation to the integrand may be developed numerically for instance by using spline functions [Friedman et al. 1983].

Alternatively it may be possible to divide the region of integration into regions of simpler behavior either from a prior knowledge of the integrand or from an adaptive investigation of the integrand [Friedman and Wright 1981; Press et al. 1992; Sasaki 1978].

For cases where the integrand is periodic with not too many dimensions and where the values of N need not take arbitrary values, the use of complete cycles in a (t,s)-sequence may be advantageous [Sloan and Joe 1994].

New sequences that have not been tested in this work are becoming available, and their properties need to be evaluated. Of particular interest is whether performance on the sum of product-type functions can be improved.



APPENDIX

[F.sub.1] = [square root of 12/s](([summation of] [x.sub.j] where j = 1 to s) - s/2)

[Mathematical Expression Omitted]

[Mathematical Expression Omitted]

[Mathematical Expression Omitted] where [f.sub.j] = -1 for [x.sub.j] [less than] 0.5 otherwise [f.sub.j] = 1

[Mathematical Expression Omitted]

[Mathematical Expression Omitted]

[Mathematical Expression Omitted]

[Mathematical Expression Omitted]


[f.sub.j] = 1 for [x.sub.j] [less than] 1/6 or [x.sub.j] [greater than] 4/6

[f.sub.j] = 0 for [x.sub.j] = 1/6 or [x.sub.j] = 4/6

[f.sub.j] = - 1 for [x.sub.j] [greater than] 1/6 and [x.sub.j] [less than] 4/6

[Mathematical Expression Omitted]


[f.sub.j] = 27.20917094 [x.sup.3] - 36.19250850 [x.sup.2] + 8.983337562 x + 0.7702079855
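As a quick numerical check (ours, not in the paper) that this cubic has the required zero mean and unit variance on [0, 1]:

```python
import numpy as np

# coefficients of the cubic f_j from the Appendix, highest power first
c = [27.20917094, -36.19250850, 8.983337562, 0.7702079855]

x = (np.arange(10**6) + 0.5) / 10**6   # midpoint rule on [0, 1]
f = np.polyval(c, x)
mean = f.mean()                        # should be ~ 0
var = (f ** 2).mean()                  # with zero mean, this is the variance
```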


ACKNOWLEDGMENTS

The authors would like to acknowledge the assistance of the editor Eric Grosse, Tim Hesterberg, and an anonymous referee. Funding in part by the Australian Research Council is gratefully acknowledged.


REFERENCES

ANTONOV, I. A. AND SALEEV, V. M. 1979. An economic method of computing LPt-sequences. USSR Comput. Math. Math. Phys. 19, 252-256.

BRAATEN, E. AND WELLER, W. 1979. An improved low-discrepancy sequence for multi-dimensional Quasi-Monte Carlo integration. J. Comput. Phys. 33, 249-258.

BRATLEY, P. AND FOX, B. L. 1988. Algorithm 659: Implementing Sobol's quasirandom sequence generator. ACM Trans. Math. Softw. 14, 1, 88-100.

BRATLEY, P., FOX, B. L., AND NIEDERREITER, H. 1992. Implementation and tests of low-discrepancy sequences. ACM Trans. Model. Comput. Simul. 2, 3, 195-213.

FAURE, H. 1982. Discrepance de suites associees a un systeme de numeration (en dimension s). Acta Arith. 41, 337-351.

FAURE, H. 1986. On the star discrepancy of generalised Hammersley sequences in two dimensions. Monatsh. Math. 101, 291-300.

FOX, B. L. 1986. Algorithm 647: Implementation and relative efficiency of quasirandom sequence generators. ACM Trans. Math. Softw. 12, 4 (Dec.), 362-376.

FRIEDMAN, J. H., GROSSE, E., AND STUETZLE, W. 1983. Multidimensional additive spline approximation. SIAM J. Sci. Stat. Comput. 4, 2, 291-301.

FRIEDMAN, J. H. AND WRIGHT, M.H. 1981. A nested partitioning procedure for numerical multiple integration. ACM Trans. Math. Softw. 7, 1, 76-92.

HALTON, J. H. 1960. On the efficiency of certain quasi-random sequences of points in evaluating multi-dimensional integrals. Numer. Math. 2, 84-90.

HELLEKALEK, P. 1984. Regularities in the distribution of special sequences. J. Number Theory 18, 41-55.

HICKERNELL, F.J. 1995. A comparison of random and quasirandom points for multidimensional quadrature. In Monte Carlo and Quasi Monte Carlo Methods in Scientific Computing. Springer, Berlin, 213-227.

KATSCHER, C., NOVAK, E., AND PETRAS, K. 1996. Quadrature formulas for multivariate convex functions. J. Complexity 12, 5-16.

KROMMER, A. R. AND UEBERHUBER, C.W. 1994. Numerical Integration on Advanced Computer Systems. Springer-Verlag, Berlin, Germany.

NIEDERREITER, H. 1992. Random Number Generation and Quasi-Monte Carlo Methods. CBMS-NSF Regional Conference Series in Applied Mathematics, vol. 63. SIAM, Philadelphia, Pa.

OWEN, A. B. 1992. A central limit theorem for Latin hypercube sampling. J. Royal Stat. Soc. B 54, 2, 541-551.

OWEN, A. B. 1995. Randomly permuted (t,m,s)-nets and (t,s)-sequences. In Monte Carlo and Quasi Monte Carlo Methods in Scientific Computing. Springer-Verlag, Berlin, Germany, 299-317.

PRESS, W. H., TEUKOLSKY, S. A., VETTERLING, W. T., AND FLANNERY, B. P. 1992. Numerical Recipes in C: The Art of Scientific Computing. 2nd ed. Cambridge University Press, New York, N.Y.

SASAKI, T. 1978. Multidimensional Monte Carlo integration based on factorized approximation functions. SIAM J. Numer. Anal. 15, 5, 938-952.

SLOAN, I. H. AND JOE, S. 1994. Lattice Methods for Multiple Integration. Clarendon Press, New York, N.Y.

SOBOL, I. M. 1967. The distribution of points in a cube and approximate evaluation of integrals. Zh. Vychisl. Mat. Mat. Fiz. 7, 784-802.

TEZUKA, S. 1993. Polynomial arithmetic analogue of Halton sequences. ACM Trans. Model. Comput. Simul. 3, 2 (Apr.), 99-107.

VAN DER CORPUT, J. G. 1935a. Verteilungsfunktionen I. Nederl. Akad. Wetensch. Proc. 38, 813-820.

VAN DER CORPUT, J. G. 1935b. Verteilungsfunktionen II. Nederl. Akad. Wetensch. Proc. 38, 1058-1066.

WHITEN, W. J. AND KOCIS, L. 1995. Regarding test functions for multi-dimensional integration. SIGNUM Newsl. 30, 1 (Jan.), 2-7.
COPYRIGHT 1997 Association for Computing Machinery, Inc.

Author: Kocis, Ladislav; Whiten, William J.
Publication: ACM Transactions on Mathematical Software
Date: June 1, 1997