
Computing Poisson Probabilities


We give an algorithm to compute the set of individual (nonnegligible) Poisson probabilities, needed for example in the applications pointed out in section 1.1. To get finite termination, we clearly have to truncate the Poisson distribution. Unlike previous contributions, we bound the mass in the truncated tails from above and the remaining individual probabilities from below. Given any "reasonable" ε (see section 3), we find a left truncation point L and a right truncation point R such that for a Poisson random variable N with parameter λ we have P[L ≤ N ≤ R] ≥ 1 − ε and R − L = O(√λ). Starting from the mode m = ⌊λ⌋, we recursively compute weights w(m − 1), w(m − 2), ..., w(L) and w(m + 1), w(m + 2), ..., w(R) such that for some constant α we have w(i) = α p(i) for L ≤ i ≤ R, where p(i) = P[N = i] and W = w(L) + ... + w(R). Our (new) lower bounds let us scale the weights, via the choice of w(m), to guarantee without repeated checking that no underflows or overflows occur. Our (new) upper bounds let us avoid estimating tail masses by summing estimated probabilities in the complementary region, a computation likely to be significantly contaminated by roundoff error. As far as we know, no other published algorithm for computing Poisson probabilities uses rigorous, practical bounds such as ours, and so those algorithms are unsatisfactory. Such bounds do not seem readily accessible elsewhere.

The overall work and space complexities are both O(√λ), which lets us handle all practical λ's. Our algorithm does not restrict λ; today's desktop computers can easily handle λ in the range 0 to 10^10, say. We give a crude analysis of roundoff error that suggests that our method is numerically stable. Possibly w(i)/W may underflow for some i, but this is ordinarily irrelevant if the computations are properly arranged. For example, we can compute a weighted sum w(L)f(L) + ... + w(R)f(R) and (only) then divide the sum by W. Likewise, the overall error in a computation is ordinarily small even though the individual estimated probabilities are all a bit off. To illustrate, we note

PROPOSITION 1. Let f be a real-valued function with ‖f‖ = sup{|f(j)|: j = 0, 1, 2, ...} and P[L ≤ N ≤ R] ≥ 1 − ε. In exact arithmetic,

  | Σ_{j=L}^{R} [w(j)/W] f(j) − Σ_{j=0}^{∞} p(j) f(j) | ≤ 2ε‖f‖.

Proof. It suffices to show that Σ_{j=0}^{∞} |q(j) − p(j)| ≤ 2ε, where q(j) = w(j)/W = p(j)/β for L ≤ j ≤ R, q(j) = 0 otherwise, and β = p(L) + ... + p(R). Now

  Σ_{j=0}^{∞} |q(j) − p(j)| = Σ_{j∉[L,R]} p(j) + Σ_{j=L}^{R} p(j)(1/β − 1) = (1 − β) + (1 − β) = 2(1 − β) ≤ 2ε,

since β = P[L ≤ N ≤ R] ≥ 1 − ε.

For large λ, our L is λ − O(√λ) and our R is λ + O(√λ). This verifies that our overall work and space complexities are both O(√λ). In contrast, the 1982 IMSL routine asks the user to specify a parameter κ, not necessarily related to λ, and outputs estimates of p(0), p(1), ..., p(κ). This amounts to setting L = 0. The κ specified by any reasonable user will be O(λ). Thus, in practice the IMSL routine requires O(λ) work.

1.1 Motivation

Gross and Miller [9] and Fox [4] consider problems of this form, in which large values of λ typically occur. Given the f(j)'s, the first summation in proposition 1 can be computed in O(√λ) time; without the left truncation, it would take O(λ) time. For the terminal-reward problem considered by Gross and Miller [9], the required f(j)'s can be computed in O(√λ) time using successive (matrix) squaring up to L, as Fox [4] details. On the other hand, consider using the weights to compute Erlang's loss formula (e.g., see Gross and Harris) with c servers as w(c)/[w(L) + ... + w(c)] for L ≤ c ≤ R. If L = 0, the error is zero (ignoring roundoff and assuming no underflow occurs), but computing the weights then takes O(λ) time. For L > 0 we do not attempt a formal error analysis here, which would require lower as well as upper bounds on the left tail. Our bounds may indicate a reasonable choice of R.
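To make the use of proposition 1 concrete, here is a minimal Python sketch (ours, not from the paper) that evaluates the first summation with unnormalized weights obtained by the recursions of section 2; the truncation points L and R, normally produced by FINDER, are assumed given.

```python
def truncated_expectation(f, lam, L, R):
    """Approximate E[f(N)], N ~ Poisson(lam), by the truncated sum of proposition 1."""
    m = int(lam)                      # mode, floor(lam)
    w = {m: 1.0}                      # any convenient w(m); the scale cancels below
    for j in range(m, L, -1):         # downward recursion: w(j-1) = (j/lam) w(j)
        w[j - 1] = (j / lam) * w[j]
    for j in range(m, R):             # upward recursion: w(j+1) = (lam/(j+1)) w(j)
        w[j + 1] = (lam / (j + 1)) * w[j]
    W = sum(sorted(w.values()))       # add small terms first to attenuate roundoff
    return sum(w[j] * f(j) for j in range(L, R + 1)) / W
```

For bounded f the truncation error is at most 2ε‖f‖ by proposition 1; for f(j) = j (unbounded, so outside the proposition's hypothesis) the sketch still recovers E[N] = λ to within the ignored tail mass.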

For another example, suppose that we want to generate truncated Poisson variates by (efficiently implemented) inversion or by the alias method. Simply scale standard uniform variates as u ← uW and then use the w(j)'s directly. Our algorithm reduces the setup time from O(λ) for a naive method (with no error bounds) to O(√λ) with bounds on truncation error. Fox [5] bounds the loss in coverage probability for confidence intervals constructed from truncated variates with any distribution. Bratley, Fox, and Schrage [2] argue that truncation avoids statistical anomalies and allows variate generation by (efficiently implemented) inversion in O(1) marginal time, independently of the truncation point. Inversion is compatible with common random numbers and antithetic variates, but rejection methods typically are not.
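A sketch of that use of the weights (our code, with hypothetical helper names; a binary search over cumulative weights stands in for the efficiently implemented indexed inversion the text has in mind):

```python
import bisect

def make_inverter(lam, L, R):
    """Build a cumulative-weight table for truncated Poisson generation by inversion."""
    m = int(lam)
    w = {m: 1.0}
    for j in range(m, L, -1):
        w[j - 1] = (j / lam) * w[j]
    for j in range(m, R):
        w[j + 1] = (lam / (j + 1)) * w[j]
    cum, total = [], 0.0
    for j in range(L, R + 1):         # cumulative sums of w(L), ..., w(R)
        total += w[j]
        cum.append(total)
    W = total
    def invert(u):                    # u in (0,1): scale by W, then invert
        return L + bisect.bisect_left(cum, u * W)
    return invert
```

For example, with λ = 10 the generator maps u = 0.5 to the median of the truncated distribution.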

1.2 Numerical Stability

The w(j)'s are slightly inaccurate due to roundoff error. They are all positive, so no cancellation errors occur when adding them to get W. Thus, the additional roundoff error induced by adding the w(j)'s is potentially troublesome only when there are many summands (large λ). If we were simply to add from left to right, say, then for large enough λ we would get w(⌊(m + R)/2⌋) ⊕ W′ = W′ in finite-precision floating-point arithmetic (⊕ = floating-point add), where W′ = w(L) + ... + w(⌊(m + R)/2⌋ − 1). Thus, we would compute W = W′ even though w(⌊(m + R)/2⌋) + ... + w(R) is not negligible. The cure is to add small weights first. This widely applicable principle for attenuating roundoff error is apparently also used in the 1982 IMSL routine to compute Poisson probabilities.
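A small, self-contained illustration (ours, not the paper's) of why summation order matters in floating point:

```python
# A tiny term added to an already-large running sum is rounded away entirely,
# while the same terms accumulated smallest-first are retained.
terms = [1.0] + [1e-17] * 100000

large_first = 0.0
for t in terms:            # 1.0 enters first; each subsequent 1e-17 rounds away
    large_first += t

small_first = 0.0
for t in reversed(terms):  # the 1e-17 terms accumulate to ~1e-12 before 1.0 is added
    small_first += t
```

Here `large_first` comes out exactly 1.0, while `small_first` retains the mass of the small terms.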

Recursive computation of weights starting from the mode is by itself not a new idea. What is new is its coupling with a priori determination of truncation points and a priori underflow checks. The obvious choice for w(m) is one. Given the computer's underflow and overflow thresholds, we typically choose w(m) far larger to avoid underflow worries.

Knüsel [10, pages 1028-1029] gives a numerically stable method to compute p(i) for any fixed i, though he gives no corresponding error bounds. It would be inefficient to use that method to compute the set of probabilities {p(L), ..., p(R)}. He also gives a numerically stable method to estimate the masses in each tail with prescribed relative error. These bounds can be transformed into corresponding bounds on absolute error. Whether such transformed bounds are looser or tighter than ours is an open question. Our bounds take O(1) time to compute, independently of all other parameters. The time and space complexities to compute Knüsel's bounds are unspecified and unclear. Our bounds on tail masses, as estimates of them, probably would have high relative error. In view of proposition 1, this seems irrelevant to the applications cited above; in other applications, however, small relative error may be important. As a check for programming errors, our w(m)/W should be approximately Knüsel's estimate of p(m).

1.3 Overview

In section 2, we give an algorithm to find the weights, which requires a subroutine to find L, R, and w(m) and to check that underflow will not occur. We sketch that subroutine in section 3, based on bounds in sections 4 and 5. Section 6 justifies the bounds in section 5. In section 7 we outline generalizations to other discrete distributions.


2. COMPUTING THE WEIGHTS

In this section we present an easily programmed algorithm to compute the weights, followed by an analysis of roundoff error.

Algorithm WEIGHTER(λ, ε, τ, ω; L, R, w(L), ..., w(R), W, F)

Inputs:
  Poisson parameter λ
  error tolerance ε ≥ 10^−10
  underflow threshold τ
  overflow threshold ω
Outputs:
  left truncation point L
  right truncation point R
  weights w(L), ..., w(R)
  total weight W
  flag F
Subroutine required:
  FINDER(λ, ε, τ, ω; L, R, w(m), F), see section 3
Comments:
  w(i) = p(i)W where p(i) = Poisson probability
  For λ ≥ 400: mass left of L ≤ ε/2; mass right of R ≤ ε/2
  For 0 < λ < 400: mass left of L ≤ ε/2 (note: L = 0 for λ ≤ 25);
    mass right of R ≤ ε/2 + 6 × 10^12 τ/ω, see section 3
  Section 3 explains where the 400 comes from.
  F = "true" if no underflow can occur while computing the weights
  F = "false" indicates that weights are not computed due to potential underflow
  w(i)/W may underflow even when F = "true"

Initialize:
  Set m ← ⌊λ⌋ (mode)
  Get L, R, w(m), and F from FINDER; if F = "false", exit.
Down:
  Set j ← m
  While j > L, execute
    w(j − 1) ← (j/λ) w(j)
    j ← j − 1
Up:
  If λ < 400, go to Special
  Else, set j ← m
  While j < R, execute
    w(j + 1) ← (λ/(j + 1)) w(j)
    j ← j + 1
Compute W:
  [Comment: We want to compute W ← w(L) + ... + w(R).]
  [Comment: To attenuate roundoff, we add small terms first.]
  W ← 0
  s ← L
  t ← R
  While s < t, execute
    If w(s) ≤ w(t), then
      W ← W + w(s)
      s ← s + 1
    else
      W ← W + w(t)
      t ← t − 1
  W ← W + w(s)
  Exit.
Special:
  If R > 600, set F ← "false" and exit.
  [Comment: Underflow is possible but not certain; section 3 explains where the 600 comes from.]
  Else, set j ← m.
  While j < R, execute
    q ← λ/(j + 1)
    If w(j) > τ/q, then
      set w(j + 1) ← q w(j) and j ← j + 1
    else
      set R ← j and go to Compute W
  Go to Compute W.
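A Python sketch of WEIGHTER's core (Down, Up, and Compute W). FINDER is not reproduced: L, R, and w(m) are assumed supplied, and the λ < 400 Special branch is omitted.

```python
def weighter_core(lam, L, R, wm):
    """Compute weights w(L..R) with w(i) proportional to p(i), and their sum W."""
    m = int(lam)                       # mode, floor(lam)
    w = {m: wm}
    j = m
    while j > L:                       # Down: w(j-1) = (j/lam) w(j)
        w[j - 1] = (j / lam) * w[j]
        j -= 1
    j = m
    while j < R:                       # Up: w(j+1) = (lam/(j+1)) w(j)
        w[j + 1] = (lam / (j + 1)) * w[j]
        j += 1
    s, t, W = L, R, 0.0                # Compute W: add small terms first,
    while s < t:                       # working inward from both ends
        if w[s] <= w[t]:
            W += w[s]; s += 1
        else:
            W += w[t]; t -= 1
    W += w[s]
    return w, W
```

Individual probabilities are then recovered as w[i]/W; for example, with λ = 10, L = 0, R = 40 the ratio w[10]/W is close to p(10) ≈ 0.12511.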

Computing each new weight takes two floating-point operations, accounting for the 2 in the exponents below. We define the relative roundoff error to be the maximum of the ratio of the computed result to the true result and the reciprocal of that ratio. With multiplication and division, bounds on relative roundoff errors multiply. Let u be the unit roundoff; here u = 2^(1−b), where b is the number of bits in the mantissas of the computer's floating-point numbers. The relative roundoff error accumulated in Down and Up is at worst O[(1 + u)^(2(m−L))] and O[(1 + u)^(2(R−m))], respectively, likely gross overestimates. For example, (1 + 10^−7)^1000 ≈ 1.0001. The implicit proportionality factor associated with estimates of roundoff error depends on whether floating-point arithmetic chops or rounds. It is less than 1.01 in all cases if we replace u by 10u.

Since Compute W involves only positive numbers, it follows (from Gill, Murray, and Wright [6, pages 11-12], for example) that the associated relative roundoff error is at worst O[(1 + u)^(R−L)]. Consider using double precision to attenuate it. The actual roundoff error is probably much less than the bound, because we add small weights first. Pushing the principle of adding small terms first to the limit, we could put all (initial) summands in a heap, remove the smallest two, insert their sum in the heap, and so on, until the heap empties. In view of Glynn's [7] corollary 1.4 (showing roughly that the mass in the tails does not overwhelm the next summand), such elaborate measures seem unnecessary and the method in WEIGHTER looks good enough.

Even with that corollary, simply terminating when the current weight falls below a heuristic threshold would leave us with no rigorous error bound on the corresponding truncated tail masses. The 1982 IMSL routine outputs estimates of p(0), p(1), ..., p(κ + 1) with κ specified by the user. It does not check whether the user-specified κ is reasonable; for example, for large λ, picking κ = ⌈√λ⌉ or κ = λ^2 is unreasonable. If κ is at least of order λ, then the IMSL routine spends most of its time computing estimates of negligible probabilities; it may have to rescale often to avoid underflow.

The roundoff error associated with L and R does not seem severe, but analyzing it would require specification of how the machine computes exponentials. As a hedge against roundoff, divide the nominal ε by 10, say. If the nominal ε is already small, we will see later that this hedge does not have a major impact on L and R.


3. FINDING L, R, AND w(m)

We now give a recipe to find L and R, given ε, as required by WEIGHTER. The heuristic choice w(m) = ω/[10^10 (R − L)] assures that W ≤ ω/10^10. The factor 10^10 typically prevents overflow when the weights are subsequently used, as in proposition 1 for example. To check for underflow, we scale the lower bounds in corollaries 3 and 4 by ω/[10^10 (R − L)] before comparing them to τ. Based on the discussion below, writing a subroutine FINDER(λ, ε, τ, ω; L, R, w(m), F) as required by WEIGHTER is straightforward.

1. If λ = 0, then set L = R = 0 and F = "false".

2. If 0 < λ < 25, then L = 0. If e^−λ < τ, set F = "false" and exit. If 0 < λ < 400, find R using corollary 1 of section 4 with λ = 400: increase k through the integers greater than 3 until the upper bound is less than ε/2. Set F = "true".

3. If λ ≥ 400, use corollary 1 with the actual λ and proceed as above to find R. Evaluate the lower bound in corollary 3 of section 5, multiplied by ω/[10^10 (R − L)], at the k corresponding to R. If the result is less than τ, set F = "false" and exit. If λ ≥ 25, then find L using corollary 2 of section 4 with the actual λ. Evaluate the lower bound in corollary 4 of section 5, multiplied by ω/[10^10 (R − L)], at the k corresponding to L. If the result is greater than τ, set F = "true"; else, set F = "false".

From inspection of corollaries 1 and 2, we see that the k's found above are O([log(1/ε)]^(1/2)), growing very slowly as ε decreases. In corollaries 1 and 2 the factor exp(−k^2/2) dominates when λ ≥ 25, say. At k = 7, this factor equals approximately 6.2 × 10^−11, and we get R − L ≤ 20√λ. Suppose that ε = 10^−10. One routinely checks that with k = 7 the bounds are less than 10^−10 for λ ≥ 25. If programming in a language like Fortran, where storage for the weights must be assigned in advance, a generous rule of thumb is to allow max(⌈20√λ⌉, 600) cells.
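The search for k can be sketched as follows (our code, using the corollary 1 bound and the constants a_λ and d(k, λ) defined in section 4; λ ≥ 25 is assumed, and for λ < 400 the recipe above substitutes λ = 400):

```python
import math

def find_right_k(lam, eps):
    """Smallest integer k > 3 whose corollary 1 bound is below eps/2."""
    a = (1 + 1/lam) * math.exp(1/16) * math.sqrt(2)                  # a_lambda
    k = 4
    while True:
        d = 1 / (1 - math.exp(-(2/9) * (k * math.sqrt(2*lam) + 1.5)))  # d(k, lambda)
        if a * d * math.exp(-k*k/2) / (k * math.sqrt(2*math.pi)) < eps / 2:
            return k
        k += 1

def right_truncation(lam, eps):
    """R = ceil(m + k*sqrt(2*lam) + 3/2) for the k found above."""
    k = find_right_k(lam, eps)
    return math.ceil(int(lam) + k * math.sqrt(2*lam) + 1.5)
```

For λ = 100 and ε = 10^−10 this yields k = 7 and R = 201, consistent with the k = 7 discussion above.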

When λ = 400, corollary 1 applies for R ≤ 600, which corresponds to k = 7. This explains the 600 above. If R is not reset in Special, then the mass to its right is at most ε/2; otherwise, the mass to the right of R is at most ε/2 + 600 × 10^10 τ/ω because of our choice for w(m).

While computing bounds, use the upper bounds on a_λ, b_λ, and d(k, λ) given in section 4 and the lower bound on c_m given in section 5. The remaining factor in any particular bound is an exponential, say exp(−g(k)). Multiply the bound on a_λ d(k, λ), b_λ, or c_m ω/[10^10 (R − L)] by exp(−g(k) + ⌊g(k)⌋). We assume (reasonably) that there is no underflow up to this point. To avoid subsequent underflow, multiply the result by e^−1 as long as the current product is greater than τe, or until ⌊g(k)⌋ multiplications occur, whichever happens first. For corollaries 1 and 2, if underflow would occur with the next (hypothetical) multiplication, then the bound is less than τ, hence acceptable provided ε > 2τ, as seems reasonable. For corollaries 3 and 4, however, if underflow would occur, then ε is too small.
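The multiplication scheme just described can be sketched as follows (our code; the hypothetical return value `None` signals that the product fell to the underflow threshold):

```python
import math

def scaled_exponential(coeff, g, tau):
    """Multiply coeff by exp(-g) in underflow-safe steps: first the fractional
    part of g, then one factor of 1/e at a time, at most floor(g) times."""
    x = coeff * math.exp(-(g - math.floor(g)))
    for _ in range(int(math.floor(g))):
        if x <= tau * math.e:          # the next division by e could pass tau
            return None
        x /= math.e
    return x
```

When no underflow intervenes, the result equals coeff·exp(−g) up to roundoff.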

Again looking at what happens for k = 7: in corollary 3 the dominant factor is exp(−(k + 1)^2/2) ≥ 2.4 × 10^−14 for λ ≥ 25; in corollary 4, the dominant factor is exp(−k̃^2/2 − k̃^3/(3√λ)) ≥ 2.1 × 10^−15 for λ ≥ 196. The discussion following corollary 4 indicates that for k = 7 we have to deal with bounds no smaller than 10^−60 for λ < 196.

Corresponding to k = 7, we get R − L ≤ max(⌈20√λ⌉, 100). In any case, underflow occurs only if a lower bound is less than 10^10 (R − L) τ/ω.

Consider τ/ω for typical computers. According to the 1982 VAX-11 Fortran reference manual, p. 2-7, τ/ω ≈ 10^−76. According to the 1982 CDC Fortran 5 reference manual, p. 1-5, τ/ω ≈ 10^−615. According to the 1981 Intel iAPX 86, 88 User's Manual, p. S-6, for the 8087 Numerical Data Processor, τ/ω ≈ 10^−75 for single precision and τ/ω ≈ 10^−615 for double precision. Probably our checks for underflow and overflow are superfluous in practice, but they are inexpensive to do and guarantee that, when passed, no problems can occur.

4. BOUNDING POISSON TAILS

Let

  p_λ(i) = e^−λ λ^i / i!,  i ≥ 0,
  Q_λ(i) = Σ_{j≥i} p_λ(j)  (right-tail mass),
  T_λ(i) = Σ_{j≤i} p_λ(j)  (left-tail mass),
  φ(x) = (1/√(2π)) e^(−x^2/2),
  Φ(x) = ∫_x^∞ φ(t) dt,
  a_λ = (1 + 1/λ) e^(1/16) √2,
  b_λ = (1 + 1/λ) e^(1/(8λ)).

For λ ≥ 25, we get a_λ ≤ 1.57 and b_λ ≤ 1.05.

Glynn [7] proves

PROPOSITION 2. Suppose λ ≥ 2 and 2 ≤ i ≤ (λ + 3)/2. Then

  Q_λ(m + i) ≤ a_λ (1 − exp(−2i/9))^−1 Φ((i − 3/2)/√(2λ)).

PROPOSITION 3. Suppose λ ≥ 2 and i ≥ 2. Then T_λ(m − i) ≤ b_λ Φ((i − 3/2)/√λ).

We reparameterize these bounds with the substitutions (i − 3/2)/√2 = k√λ and i − 3/2 = k√λ, respectively. This gives

  Q_λ(⌈m + k√(2λ) + 3/2⌉) ≤ a_λ d(k, λ) Φ(k),
  T_λ(⌊m − k√λ − 3/2⌋) ≤ b_λ Φ(k),

where d(k, λ) = [1 − exp(−(2/9)(k√(2λ) + 3/2))]^−1 and T_λ(j) = 0 for j < 0. For λ ≥ 25 and k ≥ 3, we get d(k, λ) ≤ 1.007.

From Abramowitz and Stegun [1, page 932], we get

PROPOSITION 4. If x > 0, then Φ(x) ≤ φ(x)/x, with error less than φ(x)/x^3.

Apply proposition 4 to Glynn's reparameterized bounds to get

COROLLARY 1. If λ ≥ 2 and 1/(2√(2λ)) ≤ k ≤ √λ/(2√2), then Q_λ(⌈m + k√(2λ) + 3/2⌉) ≤ a_λ d(k, λ) e^(−k^2/2)/(k√(2π)).

COROLLARY 2. If λ ≥ 2 and k ≥ 1/√(2λ), then T_λ(⌊m − k√λ − 3/2⌋) ≤ b_λ e^(−k^2/2)/(k√(2π)).
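Both corollaries are cheap to evaluate; here is a sketch (our code) with the constants a_λ, b_λ, and d(k, λ) from above:

```python
import math

def right_tail_bound(lam, k):
    """Corollary 1: upper bound on the mass at or beyond ceil(m + k*sqrt(2*lam) + 3/2)."""
    a = (1 + 1/lam) * math.exp(1/16) * math.sqrt(2)
    d = 1 / (1 - math.exp(-(2/9) * (k * math.sqrt(2*lam) + 1.5)))
    return a * d * math.exp(-k*k/2) / (k * math.sqrt(2*math.pi))

def left_tail_bound(lam, k):
    """Corollary 2: upper bound on the mass at or below floor(m - k*sqrt(lam) - 3/2)."""
    b = (1 + 1/lam) * math.exp(1/(8*lam))
    return b * math.exp(-k*k/2) / (k * math.sqrt(2*math.pi))
```

For λ = 100 and k = 3 these give roughly 2.2 × 10^−3 and 1.5 × 10^−3; the exact tail masses are smaller, as the corollaries promise.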

Corollary 1 does not contradict the fact that, for large enough truncation points, the mass in the right Poisson tail is an order of magnitude greater than the mass in the corresponding normal tail. In corollary 1, the truncation point is at most ⌈m + λ/2 + 3/2⌉.


5. LOWER BOUNDS ON POISSON PROBABILITIES

We bound the Poisson probabilities p_λ(i) from below to guarantee that, properly scaled, they do not underflow for L_λ ≤ i ≤ R_λ. By the monotonicity of p_λ(i) to the left and to the right of the mode m = ⌊λ⌋, it suffices to check only p_λ(L_λ) and p_λ(R_λ). The subroutines FINDER and WEIGHTER use corollaries 3 and 4 below only for λ ≥ 25. For 0 < λ < 25, we set L_λ = 0 and R_λ = R_400; the latter is justified since the mass beyond R_400 only decreases as λ decreases. WEIGHTER checks that R_400 ≤ 600; for ε corresponding to k = 7, R_400 = 600. It then assures that properly scaled probabilities do not underflow, resetting R_λ if necessary. The error bound is then ε/2 + 10^10 (R_400 − R_λ) τ/ω ≤ ε/2 + 6 × 10^12 τ/ω. The second term is negligible when ε >> 10^12 τ/ω, which holds for ε = 10^−10 and the computers considered in section 3.

Let c_m = (1/√(2πm)) exp(m − λ − 1/(12m)).

According to Feller [3, page 54], the following bound supplements Stirling's formula: n! < √(2πn) n^n e^−n e^(1/(12n)).

It readily follows that p_λ(m) ≥ c_m.
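A quick numerical check (ours) of this lower bound at the mode:

```python
import math

def c_m(lam):
    """The lower bound c_m on p_lam(m) at the mode m = floor(lam)."""
    m = int(lam)
    return math.exp(m - lam - 1/(12*m)) / math.sqrt(2*math.pi*m)

def poisson_pmf(lam, i):
    """p_lam(i) computed via logarithms, to avoid overflow in lam**i and i!."""
    return math.exp(-lam + i * math.log(lam) - math.lgamma(i + 1))
```

The inequality p_λ(m) ≥ c_m holds across the whole practical range of λ, including very large values where direct evaluation of λ^m/m! would overflow.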

Section 6 proves

PROPOSITION 5. For i ≥ 0, p_λ(m + i) ≥ p_λ(m) exp(−i(i + 1)/(2λ)) ≥ c_m exp(−(i + 1)^2/(2λ)).

PROPOSITION 6. For 0 < i ≤ λ/2,

(i) p_λ(m − i) ≥ p_λ(m) exp(−i(i − 1)/(2λ) − i(i − 1)(2i − 1)/(6λ^2)) ≥ c_m exp(−i^2/(2λ) − i^3/(3λ^2)).

(ii) For 0 < i ≤ m, p_λ(m − i) ≥ c_m [1 − i/(m + 1)]^i.

COROLLARY 3. Let k̂ = k√2 + 3/(2√λ). Then for k > 0, p_λ(⌈m + k√(2λ) + 3/2⌉) = p_λ(⌈m + k̂√λ⌉) ≥ c_m exp(−(k̂ + 1)^2/2).

COROLLARY 4. Let k̃ = k + 3/(2√λ).

(i) For 0 < k̃ ≤ √λ/2, p_λ(⌊m − k√λ − 3/2⌋) = p_λ(⌊m − k̃√λ⌋) ≥ c_m exp(−k̃^2/2 − k̃^3/(3√λ)).

(ii) For k̃ ≤ m/√(m + 1), p_λ(⌊m − k√λ − 3/2⌋) ≥ p_λ(⌊m − k̃√(m + 1)⌋) ≥ c_m (1 − k̃/√(m + 1))^(k̃√(m + 1)).

(iii) For k̃ ≤ m/√(m + 1), p_λ(⌊m − k√λ − 3/2⌋) ≥ p_λ(0) = e^−λ.

We suggest using (i) when applicable; the bound is then at least c_m exp(−2k̃^2/3). If only (ii) and (iii) apply, compute both bounds and use the maximum. Since for large m we have (1 − k̃/√(m + 1))^(k̃√(m + 1)) ≈ e^(−k̃^2) (*), computing the left side is numerically stable. For example, with m = 63 and k̃ = 7, (i) does not apply and (1 − 7/8)^56 = 2.6 × 10^−51 [see (ii)], e^−49 = 5.2 × 10^−22 [see (*)], e^−63 = 4.4 × 10^−28 [see (iii)]. Convergence in (*) is glacial. For λ ≥ 25, we get c_m ≥ 1/(5e√(2πm)) ≥ 0.02935/√m.
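The m = 63 comparison in the text is easy to reproduce (our code; λ is taken as 63, its smallest compatible value):

```python
import math

m, kt = 63, 7.0
root = math.sqrt(m + 1)                  # = 8
bound_ii = (1 - kt / root) ** (kt * root)  # corollary 4(ii): (1 - 7/8)^56
bound_star = math.exp(-kt * kt)          # the limit e^(-k~^2) = e^-49
bound_iii = math.exp(-63.0)              # corollary 4(iii): e^-lambda with lambda = 63
```

As the text notes, the fallback (iii) beats (ii) here by over twenty orders of magnitude, while both remain far from the limiting value in (*).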


6. PROOFS OF PROPOSITIONS 5 AND 6

We use the following known facts: log(1 + x) ≤ x for x ≥ 0, and log(1 − x) ≥ −x − x^2 for 0 ≤ x ≤ 1/2.

Right of mode:

  p(m + i) = p(m) exp(−Σ_k log[(m + k)/λ]) ≥ p(m) exp(−Σ_k log[1 + k/λ]) ≥ p(m) exp(−Σ_k k/λ) = p(m) exp(−i(i + 1)/(2λ)).

Left of mode:

  p(m − i) = p(m) exp(Σ_k log[(m − k + 1)/λ]) ≥ p(m) exp(Σ_k log[1 − k/λ]) ≥ p(m) exp(−Σ_k (k/λ + k^2/λ^2)) ≥ p(m) exp(−i(i − 1)/(2λ) − i(i − 1)(2i − 1)/(6λ^2)).

Also, p(m − i) ≥ p(m)((m − i + 1)/λ)^i ≥ c_m ((m − i + 1)/(m + 1))^i = c_m [1 − i/(m + 1)]^i.


7. GENERALIZATIONS

Our bounds probably can be tightened by more intricate analysis. As they stand, they seem good enough for practical purposes. There are similar tradeoffs between complexity and tightness of error bounds for other discrete distributions, such as the binomial and hypergeometric distributions. Finding good tradeoffs for such distributions is a subject for future research. Given appropriate bounds, a good strategy to compute the set of individual, nonnegligible probabilities from these distributions is similar to our strategy for the Poisson distribution:

1. Choose an appropriate weight for the mode or the mean.

2. Find appropriate truncation points L and R from upper bounds on the tails, possibly using a counterpart to Special.

3. Find lower bounds on the individual probabilities and check for underflow at L and at R.

4. Compute weights recursively outward to L and to R.

5. Compute the total weight by adding the smallest terms first, i.e., inward.

In this paper we focused on the Poisson distribution because it is the most relevant to our interests (see section 1.1) among the discrete distributions for which the set of nonnegligible individual probabilities is nontrivial to compute.
COPYRIGHT 1988 Association for Computing Machinery, Inc.

Article Details
Title Annotation: Research contributions; Management science and operations research
Author: Fox, Bennett L.; Glynn, Peter W.
Publication: Communications of the ACM
Date: Apr 1, 1988
