A Sample of Brilliance
(It is important that no number appear twice; holding more than one ace of spades can jeopardize a cardplayer's health.) Random samples also arise in applications such as simulation, program testing, and statistics.
The first section of this column reviews several standard algorithms for random sampling. The next section describes an elegant new algorithm by Floyd. The third section describes how Floyd extends his algorithm to generate random permutations.
A Sampling of Sampling Algorithms
Before we can generate a random sample, we have to be able to generate a single random number. That topic is worth a column all its own, so we will simply assume that we have a function RandInt (I, J) that returns an integer uniformly distributed over I..J.
It is easy to generate a random sequence of M integers in the range 1..N, so long as we don't mind duplicates:

    for I := 1 to M do
        print RandInt(1, N)

When I set M to 12 and N to 3, that code produced the sequence

    3 1 3 3 1 1 1 2 1 1 3 1
This very sequence might come in handy for your next tough game of rock-paper-scissors; more serious applications include testing finite state machines and sorting programs.
Many applications, though, require a random sample without duplicates. A statistical analysis, for instance, might waste work by observing the same element twice. Such samples are often referred to as "samples without replacement" or as "combinations"; for the remainder of this column, though, the word "sample" will denote a random sample with no duplicates.
The December 1984 column described several sampling algorithms. The majority of the algorithms were based on the pseudocode in Algorithm S, which stores the sample in the set S.

    initialize set S to empty
    Size := 0
    while Size < M do
        T := RandInt(1, N)
        if T is not in S then
            insert T in S
            Size := Size + 1
ALGORITHM S. A Typical Sampling Algorithm
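Algorithm S translates directly into Python. In this sketch (my translation, not the column's code), the standard library's random.randint plays the role of RandInt:

```python
import random

def sample_s(m, n):
    """Algorithm S: draw an m-element sample from 1..n by rejection.

    RandInt may repeat values already drawn, so the loop can call it
    more than m times before the set reaches size m."""
    s = set()
    while len(s) < m:
        t = random.randint(1, n)
        if t not in s:
            s.add(t)
    return s
```

Note the waste: when m is close to n, most calls to randint hit values already in the set.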
If the set is implemented correctly and if RandInt produces random integers, then Algorithm S produces a random sample. That is, each M-element subset is produced with probability 1/(N choose M). The loop invariant is that S always contains a random sample of Size integers in the range 1..N.
There are four operations on S: initializing it to empty, testing an integer for membership, inserting a new integer, and printing all the members. The December 1984 column sketched five data structures that could be used to implement the set S: bit vectors, arrays (sorted and unsorted), binary search trees, and bins. In the May 1986 column, Don Knuth presented a "literate" sampling program that implements S as an ordered hash table; David Gries described the same data structure using abstract data types in the April 1987 column.
Algorithm S has many virtues: it is correct, fairly efficient, and remarkably succinct. It is so good, as a matter of fact, that I thought one couldn't do better. In the December 1984 column I therefore charged ahead and described it in detail.
Unfortunately, I was wrong; fortunately, Bob Floyd caught me sleeping. When he studied Gries's implementation of Algorithm S in the April 1987 column, he was bothered by a flaw that is displayed clearly when M = N = 100. When Size = 99, the set S contains all but one of the desired integers. The algorithm closes its eyes and blindly guesses integers until it stumbles over the right one, which requires 100 random numbers on the average. That analysis assumes that the random number generator is truly random; for some non-random sequences, the algorithm won't even terminate.
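The hundred-guesses-on-average behavior is easy to check empirically. This simulation (the function name is mine) isolates the Size = 99 state: it repeatedly draws random integers until the single missing value appears, and averages the number of draws:

```python
import random

def draws_until_last(n, trials=5000, seed=1):
    """Average number of RandInt calls Algorithm S needs to find the one
    value still missing when Size = n - 1 and M = N = n.

    Each trial is a geometric experiment with success probability 1/n,
    so the average should be close to n."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        missing = rng.randint(1, n)       # the one value not yet in S
        draws = 1
        while rng.randint(1, n) != missing:
            draws += 1
        total += draws
    return total / trials
```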
Floyd set out to find an algorithm that uses exactly one call of RandInt for each random number in S. The structure of Floyd's algorithm is easy to understand recursively: to generate a 5-element sample from 1..10, we first generate a 4-element sample from 1..9, and then add the fifth element. The recursive program is sketched as Program F1.
We can appreciate the correctness of Floyd's algorithm anecdotally. When M = 5 and N = 10, the algorithm first recursively computes in S a 4-element random sample from 1..9. Next it assigns to T a random integer in the range 1..10. Of the 10 values that T can assume, exactly 5 result in inserting 10 into S: the four values already in S, and the value 10 itself. Thus element 10 is inserted into the set with the correct probability of 5/10. The next section proves that Algorithm F1 produces each M-element sample with equal probability.

    function Sample(M, N)
        if M = 0 then
            return the empty set
        else
            S := Sample(M-1, N-1)
            T := RandInt(1, N)
            if T is not in S then
                insert T in S
            else
                insert N in S
            return S
ALGORITHM F1. Floyd's Algorithm--Recursive
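A minimal Python rendering of Algorithm F1 (my translation, with random.randint standing in for RandInt):

```python
import random

def sample_f1(m, n):
    """Algorithm F1: Floyd's recursive sampling algorithm.

    Uses exactly m calls to randint, one per element of the sample."""
    if m == 0:
        return set()
    s = sample_f1(m - 1, n - 1)        # m-1 elements from 1..n-1
    t = random.randint(1, n)
    if t not in s:
        s.add(t)
    else:
        s.add(n)                       # t collided, so insert n itself
    return s
```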
Because Algorithm F1 uses a restricted form of recursion, Floyd was able to translate it to an iterative form by introducing a new variable, J. (Problem 8 addresses recursion removal in more general terms.) The result is Algorithm F2, which is much more efficient than Algorithm S, yet almost as succinct. Problems 2 and 3 address the data structures that might be used to implement the set S.
    initialize set S to empty
    for J := N - M + 1 to N do
        T := RandInt(1, J)
        if T is not in S then
            insert T in S
        else
            insert J in S

ALGORITHM F2. Floyd's Algorithm--Iterative

For those who might scoff that Algorithm F2 is "just pseudocode," Program F2 implements Floyd's algorithm in the Awk language. The June 1985 column sketched that language, with particular emphasis on its associative arrays (which are used to implement the set S in Program F2). Complete with input and output, the program requires only nine lines of Awk.

    BEGIN { m = ARGV[1]; n = ARGV[2]
            for (j = n-m+1; j <= n; j++) {
                t = 1 + int(j * rand())
                if (t in s) s[j] = 1
                else        s[t] = 1
            }
            for (i in s) print i
          }
PROGRAM F2. Floyd's Algorithm in Awk
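For readers without Awk at hand, here is an equivalent sketch of Program F2 in Python (my translation; a set object replaces Awk's associative array):

```python
import random

def sample_f2(m, n):
    """Algorithm F2: Floyd's iterative sampling algorithm.

    Makes exactly m calls to randint and never rejects a draw."""
    s = set()
    for j in range(n - m + 1, n + 1):
        t = random.randint(1, j)
        if t not in s:
            s.add(t)
        else:
            s.add(j)                   # t collided, so insert j itself
    return s
```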
Some programs that use a random sample require that the elements of the sample occur in a random order. Such a sequence is called a random permutation without replacement of M integers from 1..N. In testing a sorting program, for instance, it is important that randomly generated input occur in random order (if the input were always sorted, for instance, it might not fully exercise the sort code).
We could use Floyd's Algorithm F2 to generate a random sample, then copy it to an array, and finally shuffle the elements of the array. This code randomly scrambles the array X[1..M]:

    for I := M downto 2 do
        Swap(X[RandInt(1, I)], X[I])
This three-step method uses 2M calls to RandInt.
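The three-step method might be sketched in Python as follows (my translation; 0-based indices replace the 1-based pseudocode):

```python
import random

def shuffled_sample(m, n):
    """Sample with Algorithm F2, then shuffle the result into random
    order: m calls of randint for the sample plus m-1 for the shuffle,
    about 2m in all."""
    s = set()
    for j in range(n - m + 1, n + 1):      # Algorithm F2
        t = random.randint(1, j)
        s.add(t if t not in s else j)
    x = list(s)                            # copy the sample to an array
    for i in range(m - 1, 0, -1):          # shuffle: Swap(X[RandInt..],X[I])
        k = random.randint(0, i)
        x[k], x[i] = x[i], x[k]
    return x
```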
Floyd's random permutation generator is very similar to Algorithm F2; it is shown as Algorithm P below. To compute an M-element permutation from 1..N, it first computes an (M-1)-element permutation from 1..N-1; a recursive version of the algorithm eliminates the variable J. The primary data structure of Algorithm P, though, is a sequence rather than a set.

    initialize sequence S to empty
    for J := N - M + 1 to N do
        T := RandInt(1, J)
        if T is not in S then
            prefix T to S
        else
            insert J in S after T
ALGORITHM P. Floyd's Permutation Algorithm
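A Python sketch of Algorithm P (my translation), using a plain list for the sequence S; list.insert and list.index stand in for the sequence operations:

```python
import random

def perm_p(m, n):
    """Algorithm P: Floyd's random permutation generator.

    Returns m distinct integers from 1..n in random order."""
    s = []
    for j in range(n - m + 1, n + 1):
        t = random.randint(1, j)
        if t not in s:
            s.insert(0, t)                 # prefix T to S
        else:
            s.insert(s.index(t) + 1, j)    # insert J in S after T
    return s
```

Problem 6 asks for representations of S faster than this list, whose insert and index operations cost O(M) each.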
Problem 5 shows that Algorithm P is remarkably efficient in terms of the number of random bits it uses; Problem 6 deals with efficient implementations of the sequence S.
We can get an intuitive feeling for Algorithm P by considering its behavior when M = N, in which case it generates a random permutation of N elements. It iterates J from 1 to N. Before execution of the loop body, S is a random permutation of the integers in 1..J-1. The loop body maintains the invariant by inserting J into the sequence; J is the first element when T = J, and otherwise J is placed after one of the J-1 existing elements at random.
In general, Algorithm P generates every M-element permutation of 1..N with equal probability. Floyd's proof of correctness uses the loop invariant that after i times through the iteration, J = N-M+i and S can be any permutation of i distinct integers in 1..J, and that there is a single way that each such permutation can be generated.
Doug McIlroy found an elegant way to phrase Floyd's proof: there is one and only one way to produce each permutation, because the algorithm can be run backward. Suppose, for instance, that M = 5, N = 10, and the final sequence is

    7 2 9 1 5
Because 10 (the final value of J) does not occur in S, the previous sequence must have been

    2 9 1 5

and RandInt returned T = 7. Because 9 (the relevant value of J) occurs in the 4-element sequence after 2, the previous T was 2. Problem 4 shows that one can similarly recover the entire sequence of random values. Because all random sequences are (supposedly) equally likely, all permutations are also.
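McIlroy's backward run can be mechanized. This sketch (the function name is mine) recovers, for each J, the value T that RandInt must have returned:

```python
def recover_ts(seq, m, n):
    """Run Algorithm P backward: from a final sequence of m values drawn
    from 1..n, recover the T returned by RandInt at each step J."""
    s = list(seq)
    ts = {}
    for j in range(n, n - m, -1):
        if j in s:
            i = s.index(j)
            # j was inserted at this step: prefixed when T = j,
            # otherwise placed immediately after T
            ts[j] = j if i == 0 else s[i - 1]
            s.pop(i)
        else:
            # this step prefixed some T != j, now the first element
            ts[j] = s.pop(0)
    return ts
```

Replaying the recovered values forward through Algorithm P reproduces the original sequence, confirming that exactly one run yields each permutation.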
We can now prove the correctness of Algorithm F2 by its similarity to Algorithm P. At all times, the set S in Algorithm F2 contains exactly the elements in the sequence S in Algorithm P. Thus each final M-element subset of 1..N is generated by M! random sequences, so all occur equiprobably.
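The counting argument can be checked exhaustively for small M and N: enumerating every possible sequence of RandInt results and running Algorithm F2 on each shows every subset arising exactly M! times. (This verification harness is mine, not the column's.)

```python
from itertools import product
from collections import Counter

def f2_with_choices(choices, m, n):
    """Run Algorithm F2 with a fixed sequence of RandInt results."""
    s = set()
    for j, t in zip(range(n - m + 1, n + 1), choices):
        s.add(t if t not in s else j)
    return frozenset(s)

def subset_counts(m, n):
    """Count how many RandInt outcome sequences yield each subset.

    Step J draws from 1..J, so the outcome space is the product of
    the ranges 1..(n-m+1), ..., 1..n."""
    ranges = [range(1, j + 1) for j in range(n - m + 1, n + 1)]
    return Counter(f2_with_choices(c, m, n) for c in product(*ranges))
```

For M = 2, N = 4 there are 3 x 4 = 12 outcome sequences and C(4,2) = 6 subsets, each produced by 2! = 2 sequences.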
Algorithm S is a pretty good algorithm, but not good enough for Bob Floyd. Not content with its inefficiency, he developed optimal algorithms for generating random samples and random permutations. His programs are a model of efficiency, simplicity, and elegance. The "Further Reading" section sketches some of the methods that Floyd uses to achieve such programs.

Problems
1. How do the various algorithms behave when the RandInt procedure is nonrandom? (Consider, for instance, generators that always return 0, or that cycle over a range that is much smaller than or much greater than M or N.)
2. Describe efficient representations for the set S in Algorithm F2.
3. Algorithms S and F2 both use a set S. Is a data structure that is efficient in one algorithm necessarily efficient in the other?
4. Complete the proof of correctness of Algorithm P by showing how to compute from a final permutation the sequence of random values that produced it.
5. How many random bits does Algorithm P consume? Show that this is close to optimal. Perform a similar analysis for Algorithm F2. Can you find algorithms that are more efficient?
6. Describe representations for the sequence S such that Algorithm P runs in O(M) expected time or in O(M log M) worst-case time. Your structures should use O(M) worst-case space.
7. Implement Floyd's algorithms in your favorite programming language.
8. Algorithm F2 is an iterative version of the recursive Algorithm F1. There are many general methods for transforming recursive functions to equivalent iterative programs (one method is often illustrated on a recursive factorial function). Consider a recursive function with this form

    function A(M)
        if M = 0 then
            return X
        else
            S := A(M-1)
            return G(S, M)

where M is an integer, S and X have the same type T, and function G maps a T and an integer to a T. Show how the function can be transformed to this iterative form

    function B(M)
        S := X
        for J := 1 to M do
            S := G(S, J)
        return S
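As a sanity check on the transformation in Problem 8, both forms can be written as higher-order Python functions (these names are mine) and compared on sample choices of G; factorial is the classic instance:

```python
def recursive_form(m, base, g):
    """Function A(M) from Problem 8: A(0) = X, A(M) = G(A(M-1), M)."""
    if m == 0:
        return base
    return g(recursive_form(m - 1, base, g), m)

def iterative_form(m, base, g):
    """Function B(M): the equivalent loop accumulating S = G(S, J)."""
    s = base
    for j in range(1, m + 1):
        s = g(s, j)
    return s
```

With base = 1 and G(S, J) = S * J, both compute M factorial; with base = [] and G(S, J) = S + [J], both build the list 1..M.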
Solutions to July's Problems
1. The problem can be rephrased as asking how many assignments this routine makes after the array X[1..N] has been sprinkled with random real numbers.

    Max := X[1]
    for I := 2 to N do
        if X[I] > Max then
            Max := X[I]
A simple argument assumes that the if statement's test succeeds about half the time, so the program will make roughly N/2 assignments. I profiled the program for ten runs with N = 1000, and the numbers of assignments were, in sorted order: 4, 4, 5, 5, 6, 7, 8, 8, 8, 9. In Section 1.2.10 of Fundamental Algorithms, Knuth shows that the algorithm makes H_N - 1 assignments, on the average, where H_N = 1 + 1/2 + 1/3 + ... + 1/N is the Nth harmonic number. For N = 1000, this analysis gives an expectation of 6.485; the average of the ten experiments is 6.4.
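Both the analysis and the experiment are easy to reproduce in Python (these helper names are mine): harmonic computes H_N directly, and the simulation mirrors the ten profiled runs above with a larger trial count.

```python
import random

def max_assignments(x):
    """Count executions of 'Max := X[I]' inside the loop of Solution 1
    (the initial assignment Max := X[1] is not counted)."""
    mx = x[0]
    count = 0
    for v in x[1:]:
        if v > mx:
            mx = v
            count += 1
    return count

def harmonic(n):
    """H_N = 1 + 1/2 + ... + 1/N."""
    return sum(1.0 / k for k in range(1, n + 1))

def average_assignments(n, trials, seed=0):
    """Average assignment count over arrays of n random reals."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        total += max_assignments([rng.random() for _ in range(n)])
    return total / trials
```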
2. This C program implements a Sieve of Eratosthenes for computing all primes less than n. The count to the left of each line tells how many times it was executed with n = 100000.

    main()
    {       int i, p, n;
            char x[100002];
    1       n = 100000;
    1       for (i = 1; i <= n; i++)
    100000          x[i] = 1;
    1       x[1] = 0; x[n+1] = 1;
    1       p = 2;
    9593    while (p <= n) {
    9592            printf("%d\n", p);
    9592            for (i = 2*p; i <= n; i += p)
    256808                  x[i] = 0;
    9592            do
    99999                   p++;
    99999           while (x[p] == 0);
            }
    }
The profile shows that there are 9592 primes less than 100,000, and that the algorithm made about 2.57N assignments. In general, the algorithm makes about N log log N assignments; the analysis involves the density of prime numbers and the harmonic numbers mentioned in Solution 1. For faster implementations of prime sieves, see the Communications papers by Mairson (September 1977), Gries and Misra (December 1978), and Pritchard (January 1981).
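The same sieve, transcribed into Python and instrumented (my translation; the C do-while becomes an explicit inner loop), should reproduce the profile's headline numbers:

```python
def sieve(n):
    """Sieve of Eratosthenes mirroring the profiled C program; returns
    the primes up to n and the count of x[i] = 0 assignments."""
    x = [1] * (n + 2)
    x[1] = 0
    x[n + 1] = 1                   # sentinel: stops the search below
    primes = []
    assignments = 0
    p = 2
    while p <= n:
        primes.append(p)
        for i in range(2 * p, n + 1, p):
            x[i] = 0               # cross out the multiples of p
            assignments += 1
        p += 1                     # the C do-while: advance to the
        while x[p] == 0:           # next uncrossed position
            p += 1
    return primes, assignments
```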
3. A simple statement-count profiler increments a counter at each statement. One can decrease memory and run time by making do with fewer counters. For instance, one might associate a counter with each basic block in the program's flow graph. One can further reduce counters by "Kirchhoff's first law": if you have a counter for an if-then-else statement and one for the then branch, then you don't need one for the else branch.
6. Problem P6 presents a correct program that is hard to prove correct. The for loop in function prime could give an infinite loop. To show that it always terminates, one must prove that if P is a prime, then there is another prime less than P^2.
Title Annotation: in Programming Pearls
Author: Bentley, Jon; Floyd, Bob
Publication: Communications of the ACM
Date: Sep 1, 1987