# How often should you clean your room?

We introduce and study a combinatorial optimization problem motivated by the question in the title. In the simple case where you use all objects in your room equally often, we investigate asymptotics of the optimal time to clean up in terms of the number of objects in your room. In particular, we prove a logarithmic upper bound, solve an approximate version of this problem, and conjecture a precise logarithmic asymptotic.

Keywords: search with cleanup, combinatorial optimization, coupon collector's problem, sequential occupancy, Stirling numbers of the second kind

Introduction

Suppose you have n objects in your room which are totally ordered. For simplicity, let us say they are books on shelves alphabetized by author and title. If you are looking for a book (assume you remember the author and title, but not its location on the shelves), the most efficient algorithm is a binary search. Namely, look at the book in the middle of the shelf, and because of the ordering, now you can narrow your search by half. Repeat this process of halving your search list, and you can find your book in about log_2(n) steps. (Here is perhaps a better model for how humans naturally search: go to where you think the book should be, scan that area, and if need be jump to a different area based on the ordering. However, a logarithmic cost still seems like a good model for this process.)
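To make the halving step concrete, here is a standard binary search in Python (an illustrative sketch of ours, not code from the paper), counting one comparison per probe:

```python
def binary_search(shelf, target):
    """Locate target in a sorted list; return (index, probes).

    Each probe inspects the middle of the remaining range and
    halves it, so about log_2(n) probes suffice for n books.
    """
    lo, hi = 0, len(shelf) - 1
    probes = 0
    while lo <= hi:
        mid = (lo + hi) // 2
        probes += 1
        if shelf[mid] == target:
            return mid, probes
        elif shelf[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, probes  # failed search: the book is not on the shelf
```

On a shelf of 1023 books this never takes more than 10 probes, in line with the log_2(n) estimate.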

The theory of searching (and sorting) algorithms is of course well studied in computer science--what is not, however, is what happens after that for humans. Namely, after you are done with your book, you can do one of two things: either put it back on the shelf, which we will also say takes about log_2(n) time, or leave it on your desk, which takes no time. The latter is of course more efficient now, but if you keep doing this, eventually all of your books will wind up as an unsorted pile on your desk. Then when you search for a book, you essentially have to go through your pile book by book (a sequential, or linear, search), which takes about n/2 time, and thus is not very efficient for n large.
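By contrast, scanning the pile is linear; a quick sketch (ours, not the paper's) confirms the (j + 1)/2 average over all possible targets in a pile of j books:

```python
def pile_search(pile, target):
    """Sequential search on an unsorted pile; return items examined."""
    for steps, item in enumerate(pile, start=1):
        if item == target:
            return steps
    return len(pile)  # examined everything; the book is not in the pile

# Averaged over all j possible targets, the cost is (j + 1) / 2.
j = 99
pile = list(range(j))
average = sum(pile_search(pile, t) for t in pile) / j
```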

The question we are interested in here is: when is the optimal time to clean up? That is, over the long run, what is the optimal value m_opt(n) of m (1 ≤ m ≤ n) at which you should put all the books in the pile back, in order, on the shelf, in the sense that the average search plus cleanup cost (per search) is minimized. Here we assume the cleanup algorithm is to simply go through the pile, book by book, and find the right location for each book on the shelves via a binary search (see Remark 1.2 for a discussion of other cleanup algorithms).

The paper is organized as follows. (See Section 1.3 for a more detailed overview.) In Section 1, after first formulating this problem precisely, we will discuss four different models and focus on the (generally unreasonable) case of the uniform distribution, i.e., where you use all objects in your room equally often. It might be more realistic to consider a power law distribution, but even the simple case of the uniform distribution is not so easy. The different models correspond to having either complete or no memory of what is in the pile, and having numbered shelves (each object has a designated location on the shelves) or unnumbered shelves (only the relative order of books is important).

In Section 2, we analyze the search and cleanup cost functions in some detail for each of these models. Our first result is that, in each of these models, one should not clean up immediately (see Proposition 1 below). In fact, if n is small enough, one should never clean up (see Remarks 4.6 and 4.7). In Section 3, we restrict ourselves to complete memory with numbered shelves for simplicity, and prove that one should clean up before about 4 log_2(n) objects are in the pile (see Proposition 2). A good lower bound for m_opt(n) is not so easy, and so we instead consider an approximate problem in Section 4. Based on the analysis from Section 2, we expect the optimal value m̃_opt(n) of m for the approximate problem to be a lower bound for m_opt(n). We essentially determine exactly the optimal value of m for the approximate problem (Theorem 3), which is about 3 log_2(n), and then based on numerics conjecture that m_opt(n) ~ 3 log_2(n) (Conjecture 4). In fact we expect that for all four models with arbitrary distributions, m_opt(n) grows no faster than 4 log_2(n). Therefore, we humbly suggest you clean your room before 4 log_2(n) objects are out.

Since we use a fair amount of (often similar looking) notation, we provide a notation guide at the end for convenience (Appendix A).

1 General Setup

1.1 The Statement of the Problem

We now make a general formulation of our problem, which we call a search with cleanup optimization problem.

Let X = {X_1, ..., X_n} be a finite set of distinct well-ordered objects, which we view as a probability space with probability measure μ. We consider the following discrete-time Markov chain, depending on a parameter 1 ≤ m ≤ n.

1. At time t = 0 each X_i is in a sorted list(i) L, and there is an unsorted pile P which is empty.

2. At any time t ∈ {0, 1, 2, ...}, each X ∈ X is in exactly one of L and P, i.e., X is a disjoint union X = L ∪ P.

3. At any time t ≥ 1 with |P| < m, exactly one object X = X_i is selected, and X is selected with probability μ(X). If the selected X ∈ P, nothing changes. Otherwise, X is removed from L and added to P.

4. At any time t, if |P| = m, we stop the process.

This process has finite stopping time with probability 1 provided at least m elements of X have nonzero probabilities, which we will assume.
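This process is easy to simulate; the following sketch (our own, for the uniform distribution studied below) draws objects until the pile reaches size m:

```python
import random

def run_process(n, m, rng):
    """Simulate the chain: select uniformly among n objects until the
    pile P holds m distinct objects; return the selection path."""
    path, pile = [], set()
    while len(pile) < m:
        x = rng.randrange(n)   # uniform mu: probability 1/n each
        path.append(x)
        pile.add(x)            # a repeat selection leaves P unchanged
    return path

path = run_process(10, 4, random.Random(0))
```

The stopping time is len(path) ≥ m, and by construction the final selection is the one that completes the pile.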

Note the state of the process at time t is described simply by a subset P of X of cardinality at most m, together with a marked element X_{i_t} (the object selected at time t ≥ 1). The set of possible states is then simply all subsets P of X of cardinality at most m, together with a marked point X_{i_t}.

Associated to this process are two (nonnegative) cost functions, S(X; P) and C(P), which do not depend upon t. Here X ∈ X and P ⊂ X. The functions S(X; P) and C(P) are called the search and cleanup costs.

Let X_m = X_{m,n} denote the set of finite sequences χ = (χ_1, ..., χ_l) in X such that (i) the underlying set {χ_1, ..., χ_l} has cardinality m, and (ii) χ_l does not lie in {χ_1, ..., χ_{l−1}}. We extend the measure μ to a probability measure on X_m by

μ(χ) = μ(χ_1) μ(χ_2) ⋯ μ(χ_l). (1.1)

Note the sequences χ ∈ X_m are in 1-1 correspondence with the possible paths of finite length for the Markov process from the initial state up to the stopping state described in Step 4. Namely, for t = 0, ..., l, let P_χ(t) denote the set of elements {χ_1, ..., χ_t} (here P_χ(0) = ∅). Thus P_χ(t) represents the "unmarked" state of the process from time t = 0 until the stopping time t = l. Furthermore each μ(χ) is the probability of that path for the process.

For example, suppose n ≥ 3, m = 3 and χ = (X_1, X_2, X_1, X_3). This corresponds to selecting X_1 at time 1, X_2 at time 2, X_1 at time 3, and X_3 at time 4, after which the process stops, since we have selected m = 3 distinct objects. Specifically, at t = 1 we have P = P_χ(1) = {X_1}; at time t = 2 we have P = P_χ(2) = {X_1, X_2}; at time t = 3, we have P = P_χ(3) = {X_1, X_2} (unchanged); and at the stopping time t = 4, we have P = P_χ(4) = {X_1, X_2, X_3}. If μ is the uniform distribution on X, the probability of this path is μ(χ) = 1/n^4.

Given χ ∈ X_m, we let l(χ) be its length, i.e., the corresponding stopping time, and write χ = (χ_1, χ_2, ..., χ_l) where l = l(χ).

Now we extend S(X; P) and C(P) to χ = (χ_1, ..., χ_l) by

S(χ) = Σ_{t=1}^{l(χ)} S(χ_t; P_χ(t − 1)) (1.2)

and

C(χ) = C(P_χ(l(χ))). (1.3)

These values are called the total search and total cleanup costs for the path [chi].

We want to optimize the average total cost function

F(m) = F(m; n) = E[(S(χ) + C(χ))/l(χ)] = Σ_{χ∈X_m} μ(χ) (S(χ) + C(χ))/l(χ). (1.4)

Assume μ(X) ≠ 0 for each X ∈ X.

Problem. Given a model M = (X, μ, S, C), determine the value m_opt(n) = m_opt(n; M) of m ∈ {1, 2, ..., n} that minimizes F(m; n).

In the event that there is more than one such minimizing m--which we do not typically expect--we may take, say, m_opt(n) to be the smallest such m, so that m_opt(n) is a well-defined function.

Here we will study the asymptotic behavior in the simple case of μ being a uniform distribution on X as n = |X| → ∞ for certain cost functions S and C specified below. We note the Markov process we consider arises in the coupon collector's (or birthday) problem, and more generally, sequential occupancy problems (see, e.g., [6] or [4]). The cost functions, however, make the analysis of this problem much more delicate than occupancy problems typically studied in the literature. It turns out that the expected value of the reciprocal of the waiting time in a sequential occupancy problem plays a key role in our analysis. Several results for the expected value of the waiting time itself are known (e.g., see [1]), but not, to our knowledge, for its reciprocal.
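For context, the expected waiting time itself is classical: under the uniform distribution, collecting m distinct objects out of n takes Σ_{j=0}^{m−1} n/(n − j) selections on average (the coupon collector computation). A quick illustration of this known formula (the script is ours):

```python
import random
from fractions import Fraction

def expected_stopping_time(n, m):
    """E[l] = sum_{j=0}^{m-1} n/(n-j): with j distinct objects already
    drawn, the next new one takes n/(n-j) selections on average."""
    return sum(Fraction(n, n - j) for j in range(m))

def simulated_stopping_time(n, m, trials, rng):
    """Monte Carlo estimate of the same expectation."""
    total = 0
    for _ in range(trials):
        pile, draws = set(), 0
        while len(pile) < m:
            pile.add(rng.randrange(n))
            draws += 1
        total += draws
    return total / trials

exact = float(expected_stopping_time(10, 5))        # about 6.456
estimate = simulated_stopping_time(10, 5, 20000, random.Random(1))
```

No such simple closed form is available for E[1/l], which is what the analysis below requires.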

1.2 Models and Cost Functions

From now on, we assume μ is the uniform distribution on X unless explicitly stated otherwise. There are four reasonable, simple search models to consider, all based on doing a binary search on L and a sequential search on P. Here we view P as an unordered set. The models depend upon whether the positions of L (the "shelves") are numbered or not and whether the process is memoryless or not. These models correspond to the following search algorithms A for an element X of L ∪ P.

For a memoryless process, at any time t, we assume we do not know what elements are in P, i.e., we do not remember the state of the system. Thus it is typically worthwhile to search L first, as searching L is much more efficient than searching P. Hence for a memoryless process, we will always first search L for X. If this search is unsuccessful (i.e., X [member of] P), then we search P.

At the other extreme, one can consider the process where one has complete memory, i.e., at any time t, we know the state P of the system. Thus if X ∈ L, we simply search L, and if X ∈ P, we only search P. The other option in the model depends on the data structure for L. Imagine X_1, ..., X_n are books with a total ordering. The X_i's in L are the books that are ordered on bookshelves, whereas the X_i's in P lie in an unorganized pile on a desk. If there is a marking on each shelf for the corresponding book, so each book has a well defined position on the shelf, we say the shelves are numbered. In this case, we think of L as a list of size n indexed by keys k_1 < k_2 < ... < k_n, where k_i points to X_i if X_i ∈ L, and k_i points to null if X_i ∈ P, and a search on L amounts to a search on n keys, regardless of how many objects X actually remain in L. Otherwise, the shelves are unnumbered, so only the relative position of the books on the shelves is important (akin to books shelved in a library stack). Here we simply view L as a sorted binary tree, and a search on L is really a search on the |L| objects in L.

While shelf positions are not typically numbered for books, this situation of "numbered shelves" commonly occurs in other situations, such as a collection of files each in their own labelled folder jacket. Namely, you may take out a file to look at, but leave the folder jacket in place so there is a placeholder for where the file goes when you put it back.
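The numbered-shelves data structure can be sketched as follows (our own illustration): the keys play the role of labelled slots, so a lookup is always a binary search over all n positions, independent of how many books remain shelved.

```python
import math

def key_search(shelf, key):
    """Binary search on the n key positions; shelf[i] is None when book i
    is off in the pile. Returns (slot contents, probes used)."""
    lo, hi = 0, len(shelf) - 1
    probes = 0
    while True:                        # key is always one of the n slots
        mid = (lo + hi) // 2
        probes += 1
        if mid == key:
            return shelf[mid], probes  # slot found; it may hold None
        elif mid < key:
            lo = mid + 1
        else:
            hi = mid - 1

n = 1000
shelf = [f"book {i}" for i in range(n)]
shelf[137] = None                # book 137 is in the pile; its slot stays
hit, p1 = key_search(shelf, 500)
gap, p2 = key_search(shelf, 137)
```

In both cases the probe count is at most ⌈log_2(n + 1)⌉: the cost of a keyed search does not depend on how many books are currently in the pile.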

With these models in mind, the four search algorithms A for an object X in L [union] P can be described as follows.

* M_1 (No memory, unnumbered shelves) A: do a binary search on the |L| objects in L; if this fails, then do a sequential search on P.

* M_2 (No memory, numbered shelves) A: do a binary search on the n keys to find the correct position for X in L; if it is not there, do a sequential search on P.

* M_3 (Complete memory, unnumbered shelves) A: if X ∈ L, do a binary search on the |L| objects in L; if X ∈ P, do a sequential search on P.

* M_4 (Complete memory, numbered shelves) A: if X ∈ L, do a binary search on the n keys for L; if X ∈ P, do a sequential search on P.

Each of these algorithms naturally gives rise to a search cost function S(X; P) where X [member of] L [union] P, namely the number of comparisons needed in this algorithm. However, it is not necessary for us to write down these functions explicitly. Rather, it suffices to explicate the following average search cost functions. (In fact, one could replace the exact search cost S(X; P) by a certain average search cost and be left with the same optimization problem--see Section 5.)

Let s_L(j) denote the average cost of a search for an object in L when L contains n − j elements (we average over both the n choose n − j possibilities for L and the n − j possibilities for the object). Similarly, let s_P(j) denote the average cost of a search for an object in P given P contains j objects (again averaging over all possibilities for P and the object).

We define the following average search cost functions for successful binary, failed binary and sequential searches on j objects:

b(j) = ((j + 1)/j) log_2(j + 1) − 1, b_f(j) = log_2(j + 1), s(j) = (j + 1)/2. (1.5)

The formula for s(j) is of course exactly the expected number of steps required for a successful sequential search. It is easily seen that when j + 1 is a power of 2, b(j) (resp. b_f(j)) is the exact expected number of steps required for a successful (resp. failed) binary search on j objects. These functions are not quite the exact average number of steps for binary searches for all j (they are not generally rational), but as we are primarily interested in asymptotic behavior, we will work with the functions given above for simplicity.(ii) Note that b(2^{r+1} − 1) − b(2^r − 1) < 2 and is in fact close to 1 for large r. So b(n) in general is a reasonable approximation of the expected cost for a successful binary search.
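In code, these average cost functions read as follows (using our reconstruction of (1.5): b(j) = ((j+1)/j) log_2(j+1) − 1, b_f(j) = log_2(j+1), s(j) = (j+1)/2); for j + 1 a power of 2 one can confirm b(j) against a brute-force count:

```python
import math

def b(j):
    """Average successful binary search cost on j objects
    (exact when j + 1 is a power of 2)."""
    return (j + 1) / j * math.log2(j + 1) - 1

def b_f(j):
    """Average failed binary search cost on j objects."""
    return math.log2(j + 1)

def s(j):
    """Average successful sequential search cost on j objects."""
    return (j + 1) / 2

def probe_count(j, target):
    """Comparisons used by binary search for target in 0, ..., j - 1."""
    lo, hi, probes = 0, j - 1, 0
    while True:
        mid = (lo + hi) // 2
        probes += 1
        if mid == target:
            return probes
        lo, hi = (mid + 1, hi) if mid < target else (lo, mid - 1)

# For j = 7 (a perfect binary tree), brute force matches b(7) = 17/7.
j = 7
brute_average = sum(probe_count(j, t) for t in range(j)) / j
```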

Then, for the above four algorithms A, the functions s_L(j) and s_P(j) are given as follows.

* M_1 (No memory, unnumbered shelves)

s_L(j) = b(n − j), s_P(j) = b_f(n − j) + s(j) (1.6)

* M_2 (No memory, numbered shelves)

s_L(j) = b(n), s_P(j) = b_f(n) + s(j) (1.7)

* M_3 (Complete memory, unnumbered shelves)

s_L(j) = b(n − j), s_P(j) = s(j) (1.8)

* M_4 (Complete memory, numbered shelves)

s_L(j) = b(n), s_P(j) = s(j) (1.9)
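Gathering (1.6)-(1.9) into one table of Python functions (a sketch under our reading that the argument j of s_L and s_P is the pile size, with b, b_f, s as in (1.5)):

```python
import math

def b(j):  return (j + 1) / j * math.log2(j + 1) - 1  # successful binary search
def bf(j): return math.log2(j + 1)                    # failed binary search
def s(j):  return (j + 1) / 2                         # sequential pile search

def model_costs(name, n):
    """Return (s_L, s_P) for a model, each a function of the pile size j."""
    return {
        "M1": (lambda j: b(n - j), lambda j: bf(n - j) + s(j)),
        "M2": (lambda j: b(n),     lambda j: bf(n) + s(j)),
        "M3": (lambda j: b(n - j), lambda j: s(j)),
        "M4": (lambda j: b(n),     lambda j: s(j)),
    }[name]

n = 100
sL1, sP1 = model_costs("M1", n)
sL4, sP4 = model_costs("M4", n)
```

Note that in the numbered-shelf models the list search cost is constant in j, while in the unnumbered models it shrinks slightly as the pile grows.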

Remark 1.1 If μ were a highly skewed distribution, then it might be more efficient in the no memory models to do the pile search before a list search (see Section 5).

We now define our cleanup cost functions, based on the simple algorithm of doing a binary search for each object in P to find the appropriate position to insert it into L. (Even if one remembers the general area where the object should go, there is still the time needed to identify/arrange the exact spot and the time to physically place it there, and a logarithmic cost seems like a reasonable model for this.) This leads to two different possible cleanup cost functions, corresponding to the cases of numbered and unnumbered shelves.

If the shelves are numbered, then the cleanup cost should just be the search cost to find the correct position for each object in P, and it makes sense to set

C(P) = Σ_{X∈P} S(X; ∅),

where S(X; ∅) denotes the search cost to find the position in L for X. Note that there is no dependence upon what order we replace the objects. However, we can make things a little easier on ourselves if we wish. Since we will just be considering an average of C(P) over χ (weighted by 1/l(χ)), it will suffice to consider an average cleanup cost

C_m = (n choose m)^{−1} Σ_{|P|=m} C(P).

Hence we have

C_m = m · b(n), M ∈ {M_2, M_4}. (1.10)

If the shelves are unnumbered, then the cleanup cost in fact depends upon the order we replace the objects. Let us write P = {Y_1, ..., Y_m} and suppose we place the objects back in the order Y_{σ(1)}, ..., Y_{σ(m)} for a permutation σ of {1, ..., m}. Write S_L(X) for the cost of a (failed if X ∉ L) binary search on L for the object X. Then the order-dependent cleanup cost is

C_σ(P) = Σ_{j=1}^{m} S_{L_j}(Y_{σ(j)}), where L_j = L ∪ {Y_{σ(1)}, ..., Y_{σ(j−1)}}.

Since P is unordered, we consider all cleanup orderings to occur with the same probability. Hence it suffices to consider an average over all possible orderings:

C(P) = (1/m!) Σ_σ C_σ(P),

where σ runs through all possible orderings of P.

As before, since we will be taking an average of our cleanup costs over χ (weighted by 1/l(χ)), we can consider the simpler quantities

C_m = (n choose m)^{−1} Σ_{|P|=m} C(P)

as in the numbered case. By additivity of the expected value, one sees

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (1.11)

As with S(X; P), we could replace the exact cleanup cost C(P) with its average over all subsets of size |P| (cf. (5.2)).

Remarks

1.2 This is not the only reasonable way to clean up. One could first sort the objects in P, which can be done in O(m log m) time, though the way humans naturally sort is perhaps better modeled by insertion sort, which takes O(m^2) time. Then one can merge L and P in O(n) steps, as in a linear merge sort. This is more efficient than our above algorithm if m is relatively large and one efficiently sorts P. Since our optimization problem is one in which m should be at most logarithmic in n (cf. Proposition 2 and Remark 1.6), our cleanup algorithm above is more efficient.

1.3 Alternatively, one could do a binary-search-based merge sort after sorting P as follows. Say the ordering on X is X_1 < X_2 < ... < X_n, and let X_{j_1}, ..., X_{j_m} be the elements of P in sorted order, i.e., j_1 < j_2 < ... < j_m. First do a binary search to insert X_{j_1} in L. Then do a binary search on the elements of L above X_{j_1} to find the position for X_{j_2}. Continue in this manner of binary searches on smaller and smaller subsets of L, to replace all m objects. This may be more efficient than the cleanup algorithm we are using, depending on how we sort P and the relative size of m and n, and it may be interesting to study our optimization problem with this type of algorithm. However, it is only slightly more efficient when m is relatively small compared to n: suppose m ≈ log n and one does an insertion sort on P; the insertion sort alone takes O(log^2 n) time, which is the same order as our original cleanup algorithm. In light of the additional complications it brings, we do not consider this type of cleanup here.

1.4 One could also consider partial cleanups, where one does not put back all objects at the same time, but only some of the items in P. We do not wish to consider such complications here. Moreover, as it typically takes time and effort for humans to switch between tasks, there seems to be extra efficiency in practice if one cleans up all at once (or in a few chunks), rather than in many small steps.

1.5 This model assumes all objects are in relatively close proximity, as in your room. If one wanted to consider a similar problem for objects in a large library or warehouse, one should include the cost of transit time for retrieving and putting back the objects in the functions s_L and C(P). The transit time should be O(√n), assuming the objects are organized in a 2-dimensional grid, or at least 3-dimensional with bounded height.

1.3 Overview

Intuitively, there are three reasons why it may be better to wait to clean up, i.e., why m_opt(n) might be greater than 1. Assume n is large.

(i) If one has complete memory and there are relatively few objects in the pile, the search cost for an object in the pile will be less than the search cost for a random object in the list.

(ii) If the shelves are not numbered and there are relatively few objects in the pile, one will almost surely be searching for objects which are in the sorted list, and this will go slightly faster if there are fewer than n objects in the list.

(iii) In all four of the above models, the average cleanup cost per search should decrease as m increases.

Thus in the case of complete memory, it is rather evident that we should have m_opt > 1. On the other hand, in the case of no memory, if one searches for an object in the pile, one first has to do a binary search on the list, which costs more than just searching for a random element in the list. So in the case of no memory, unnumbered shelves, it is not a priori obvious whether this factor or points (ii) and (iii) will win out. This is settled by our first result, which says one should never clean up immediately.

Proposition 1 Suppose M ∈ {M_1, M_2, M_3, M_4}. For any n ≥ 2, we have m_opt(n) > 1.

This is not hard, and we provide two proofs: one by computing F(1) and F(2) explicitly in each model (see Section 2.3), and another by observing F(m) < F(1) whenever m < 4b(n - m) (see Lemma 2.13).

In Section 3, we restrict ourselves for simplicity to the case of complete memory and numbered shelves (model M_4). An upper bound for m_opt is not too difficult, since after the pile is a certain size, each search will have an associated cost that is at least F(1). Specifically, we show

Proposition 2 Let M = M_4. For n ≥ 1, m_opt(n) < 4b(n) ≤ 4 log_2(n + 1).

The problem of obtaining a good lower bound seems much more difficult, and we use some bounds shown in Section 4 to construct an approximation F̃(m) for F(m) such that the (smallest if not unique) value m̃_opt(n) (see Section 2 for the definition) of m which minimizes F̃(m) should satisfy m̃_opt(n) ≤ m_opt(n) (Conjecture 4.2). While we can compute m̃_opt(n) for fairly large n fairly quickly, the amount of time required to compute m_opt(n) is significant, so we can only compare values of these functions for relatively small n (see Table 4.1), but it appears that m̃_opt(n) ~ m_opt(n). Given that this is the case, one would like to determine m̃_opt(n).

Theorem 3 (Theorem 4.3) For any n ≥ 5, we have

3b(n) − 2/3 ≤ m̃_opt(n) < 3b(n) + 1/2,

i.e., for n ≥ 5, m̃_opt(n) equals ⌈3b(n) − 3/2⌉ or ⌈3b(n) − 3/2⌉ + 1.

This leads us to the following conjecture about our original problem.

Conjecture 4 (Conjectures 4.2 and 4.5) Let M = M_4. For n ≥ 5, we have

m_opt(n) ≥ 3b(n) − 3/2,

and, asymptotically,

m_opt(n) ~ 3b(n) ~ 3 log_2(n).
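In concrete terms (our own illustration, using the approximate binary search cost b(n) = ((n+1)/n) log_2(n+1) − 1 from (1.5)): the conjectured optimum 3b(n) and the bound of Proposition 2 grow very slowly with n.

```python
import math

def b(n):
    """Approximate average successful binary search cost, as in (1.5)."""
    return (n + 1) / n * math.log2(n + 1) - 1

for n in (100, 1000, 10000):
    print(f"n = {n:>5}: conjectured optimum near m = {3 * b(n):.1f}, "
          f"upper bound m = {4 * b(n):.1f}")
```

Even with 10000 books, one should tidy up well before 50 books are piled on the desk.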

We briefly touch on the amount of cost savings in this optimization problem in Remark 4.8.

Finally, in Section 5, we make some comments about the problem for non-uniform distributions. In particular, we expect that, as one varies the underlying distribution, m_opt(n) is maximized for the uniform distribution.

Remark 1.6 Based on the above factors, one would expect that the optimal cleanup point should be greater in the case of complete memory versus no memory, as well as in the case of unnumbered shelves versus numbered shelves. Consequently, we expect that

m_opt(n; M_1) ≤ m_opt(n; M_2) ≤ m_opt(n; M_4) ≤ m_opt(n; M_3).

We verified this numerically for small n, but we do not focus on this here. In particular, we note that preliminary numerics for M_3 suggest m_opt(n) ~ 4 log_2(n) (Remark 4.7). (In this paper, by "numerical calculations" we mean that we used high-precision floating point calculations in PARI/GP, not that our calculations are provably correct.)

2 Expectation costs

In this paper, m and n denote integers satisfying 1 ≤ m ≤ n. Further, unless otherwise specified, χ will denote a path in X_m. If f is a function on X_m, we sometimes denote E[f] by E_m[f] to specify m, or E_{m,n}[f] if we want to specify both m and n.

In this section, we decompose

F(m) = F_L(m) + F_P(m) + F_C(m),

where the terms on the right will represent average list search, average pile search and average cleanup costs. We will analyze these terms individually. (In the case of no memory, where one does a list search then a pile search for an object X ∈ P, we include both of these search costs in the function F_P.) It appears that F_L and F_C are decreasing in m, whereas F_P is increasing in m (cf. Remark 2.7 and Lemma 2.8). We also expect that F is unimodal--initially decreasing, then increasing. Thus our optimization problem is about the question of when F_P begins increasing faster than F_L + F_C decreases.

2.1 Expected search cost

In this section, we want to find a way to calculate E[S/l]. We can reduce this to studying averages of the form

E[f] = Σ_{χ∈X_m} μ(χ) f(l(χ)), (2.1)

where f is a function of the length l(χ) alone. For l ≥ m, set

X_m^(l) = {χ ∈ X_m : l(χ) = l}. (2.2)

Namely, note the probability that χ equals any fixed element of X_m^(l) depends only on l, and is

μ(χ) = n^{−l}. (2.3)

Hence

E[f] = Σ_{l=m}^∞ |X_m^(l)| f(l) n^{−l}. (2.4)

Proposition 2.1 We have

|X_m^(l)| = (n choose m−1) (m − 1)! S(l − 1, m − 1) (n − m + 1).

Here S(l, m) denotes the Stirling number of the second kind, i.e., the number of ways to partition a set of l elements into m nonempty subsets.

Proof: Note X_m^(l) is in bijection with the set of pairs (α, X) where α is a sequence of length l − 1 in X consisting of m − 1 distinct elements, and X is an element of X not occurring in α. The number of such α is simply the number of surjective maps from {1, ..., l − 1} to {1, ..., m − 1}, which is (m − 1)! S(l − 1, m − 1) by the twelvefold way, times the number of possibilities for the set of m − 1 distinct elements in α, which is (n choose m−1). Observing that for each such α there are n − (m − 1) distinct choices for X gives the stated result.

As an aside, we note this provides a proof of the identity (cf. [4, Thm 2.11])

Σ_{l=m}^∞ (n choose m−1) (m − 1)! S(l − 1, m − 1) (n − m + 1) n^{−l} = 1, (2.5)

since Σ_{χ∈X_m} μ(χ) = 1.
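Both Proposition 2.1 and the identity (2.5) can be sanity-checked numerically (our own script): the path probabilities |X_m^(l)|/n^l should sum to 1 over all stopping times l.

```python
from functools import lru_cache
from math import comb, factorial

@lru_cache(maxsize=None)
def stirling2(l, m):
    """Stirling number of the second kind, via the recurrence
    S(l, m) = m S(l - 1, m) + S(l - 1, m - 1)."""
    if l == m:
        return 1
    if m == 0 or m > l:
        return 0
    return m * stirling2(l - 1, m) + stirling2(l - 1, m - 1)

def num_paths(n, m, l):
    """|X_m^(l)| as counted by Proposition 2.1."""
    return comb(n, m - 1) * factorial(m - 1) * stirling2(l - 1, m - 1) * (n - m + 1)

# The probabilities |X_m^(l)| / n^l over all stopping times l sum to 1.
n, m = 6, 4
total = sum(num_paths(n, m, l) / n**l for l in range(m, 300))
```

The truncation at l = 300 is harmless here since the tail decays geometrically.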

Write

S([chi]) = [S.sub.L]([chi]) + [S.sub.P]([chi])

where

S_L(χ) = Σ_{1 ≤ t ≤ l(χ), χ_t ∉ P_χ(t−1)} S(χ_t; P_χ(t − 1))

and

S_P(χ) = Σ_{1 ≤ t ≤ l(χ), χ_t ∈ P_χ(t−1)} S(χ_t; P_χ(t − 1))

In other words, [S.sub.L]([chi]) (resp. [S.sub.P]([chi])) is the total cost of searches along [chi] when the sought-after object is in L (resp. P).

The action of the symmetric group Sym(X) on X induces an action on X_m. Namely, for σ ∈ Sym(X), put

σχ = (σ(χ_1), ..., σ(χ_l)).

Also, for χ ∈ X_m, we put τ_j(χ) to be the number of times one searches along χ for an object in P when P has size j. Explicitly, set t_0 = t_0(χ) = 0 and, for 1 ≤ j ≤ m, let t_j = t_j(χ) be the minimal integer such that |P_χ(t_j)| = j. Then for 0 ≤ j < m, we set τ_j(χ) = t_{j+1}(χ) − t_j(χ) − 1.

Lemma 2.2 For any χ ∈ X_m, we have the following average cost formulas:

(1/n!) Σ_{σ∈Sym(X)} S_L(σχ) = Σ_{j=0}^{m−1} s_L(j)

and

(1/n!) Σ_{σ∈Sym(X)} S_P(σχ) = Σ_{j=1}^{m−1} τ_j(χ) s_P(j).

Proof: To see the first equality, observe that for any χ, there must be exactly one search for an object which is in L when L has n − j objects, for each j = 0, 1, ..., m − 1. Fixing one such j and averaging the contribution of this search cost over the permutations σ yields s_L(j).

The second equality is similar.

This yields the following expected cost formulas:

E[S_L] = Σ_{j=0}^{m−1} s_L(j)

and

E[S_P] = Σ_{j=1}^{m−1} E[τ_j] s_P(j).

Consequently, one has

Lemma 2.3

E[S/l] = F_L(m) + F_P(m),

where

F_L(m) = E[1/l] Σ_{j=0}^{m−1} s_L(j)

and

F_P(m) = Σ_{j=1}^{m−1} s_P(j) E[τ_j/l].

2.2 Expected cleanup cost

The expected cleanup cost per search is simply

F_C(m) = E[C/l] = C_m E[1/l],

where C_m is as in (1.10) or (1.11) according to whether the shelves are numbered or not.

With this notation, the expected search-and-cleanup cost is

F(m) = F_L(m) + F_P(m) + F_C(m).

Note that in the case of numbered shelves, we have

F_C(m) = F_L(m).

In the case of unnumbered shelves,

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

and

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Consequently, we have

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

We remark that this implies F_C(m) ≤ F_L(m).

In any case, we have reduced our problem to studying the expected list search cost F_L(m) and the expected pile search cost F_P(m).

2.3 Some simple calculations

Here, we calculate F(1; n) and F(2; n) for each of the four models discussed above, which we hope will be instructive. In all cases, these calculations, together with the observation (2.6) that E_2[1/l] < 1/2, imply that F(2; n) < F(1; n) for all n ≥ 2, giving one proof of Proposition 1. We remark the proof of this inequality does not depend on the specific definitions of the functions b(j) and b_f(j), just that these functions are increasing. In fact, it does not depend upon the definition of s(j) either, as long as s(1) ≥ 0.

Calculations for m = 1

First consider m = 1. Then X_1 = {(X_1), (X_2), ..., (X_n)}, i.e., every path has length 1. Consequently E[1/l] = 1 and F_P(1) = 0.

2.3.1 Unnumbered shelves

Suppose we have unnumbered shelves, i.e., [M.sub.1] or [M.sub.3]. Then

F(1; n) = F_L(1; n) + F_C(1; n) = b(n) + b_f(n − 1).

2.3.2 Numbered shelves

Suppose we are in the case of numbered shelves, i.e., [M.sub.2] or [M.sub.4]. Then

F(1; n) = 2[F.sub.L](1; n) = 2[s.sub.L](n) = 2b(n).

Calculations for m = 2

Now take m = 2. Then, for l [greater than or equal to] 2, [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] consists of the [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] sequences of the form ([X.sub.1], [X.sub.1],... , [X.sub.1], [X.sub.2]), where there are l - 1 occurrences of [X.sub.1]. Then

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (2.6)

As an aside, we note that the equality in (2.6) yields the closed form expression

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (2.7)

Since

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

we have

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]
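The inequality [E.sub.2][1/l] < 1/2 invoked in these calculations is easy to check numerically: for m = 2 a path is [X.sub.1] repeated l - 1 times followed by [X.sub.2], so l is 1 plus a geometric variable with success probability (n - 1)/n. The following sketch (in Python; the series cutoff `terms` is our choice) sums the expectation directly:

```python
# Numerical check of E_2[1/l] < 1/2.  For m = 2 a path is (X_1, ..., X_1, X_2),
# so l is 1 plus a geometric variable with success probability (n - 1)/n:
#     P(l = k) = (1/n)^(k-2) * (n - 1)/n   for k >= 2.
def expected_inv_length_m2(n, terms=400):
    p = (n - 1) / n
    return sum((1.0 / k) * (1.0 / n) ** (k - 2) * p for k in range(2, terms + 2))

for n in [2, 3, 10, 100]:
    val = expected_inv_length_m2(n)
    assert val < 0.5
    print(n, round(val, 6))
```

The values increase toward 1/2 as n grows but stay strictly below it, consistent with (2.6).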

2.3.3 No memory, unnumbered shelves

Suppose M = [M.sub.1], so

F(2; n) = [F.sub.L](2; n) + [F.sub.P](2; n) + [F.sub.C](2; n).

Here [F.sub.L](2; n) = E[1/l](b(n) + b(n - 1)), [F.sub.C](2; n) = E[1/l]([b.sub.f](n - 1) + [b.sub.f](n - 2)), and [s.sub.P](1) = [b.sub.f](n - 1) + s(1), so

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

2.3.4 No memory, numbered shelves

Suppose M = [M.sub.2], so

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

2.3.5 Complete memory, unnumbered shelves

Suppose M = [M.sub.3], so

F(2; n) = [F.sub.L](2; n) + [F.sub.P](2; n) + [F.sub.C](2; n). Here [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], and [s.sub.P](1) = s(1), so

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

2.3.6 Complete memory, numbered shelves

Suppose M = [M.sub.4], so

F(2; n) = 2[F.sub.L](2; n) + [F.sub.P](2; n).

Here [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] and [s.sub.P](1) = s(1), so

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

2.4 Expected list search cost

We now return to studying search costs; in particular, we consider the expected list search and cleanup costs, [F.sub.L](m) and [F.sub.C](m). Since

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

studying these quantities reduces to studying

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (2.8)

Note that this is the expected value of the reciprocal of a waiting time for a sequential occupancy problem. First we obtain the following finite formula, which allows us to compute [E.sub.m][1/l] quickly.

Lemma 2.4 We have

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (2.9)

Note the first term can be interpreted as the j = 0 term for the sum on the right.

Proof: The generating function for Stirling numbers of the second kind is given by

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (2.10)

Hence

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (2.11)

We compute the integral using partial fractions.

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

where [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], and so we have

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (2.12)

We can simplify this sum a little by observing that m! [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]. Then the first part of the above summation simplifies to

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Putting all this together yields the lemma.

Since the above formula is an alternating sum, it is not so useful in studying the behavior of [E.sub.m][1/l] as m varies, which is our goal, though it is useful for numerics.
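As a cross-check on such numerics, one can also evaluate (2.8) directly from the sequential occupancy distribution. The sketch below assumes the standard fact (our phrasing, since the displayed formulas are not reproduced here) that [[mu].sub.m](l) = n(n - 1)...(n - m + 1)S(l - 1, m - 1)/[n.sup.l], where S denotes a Stirling number of the second kind, and truncates the tail of the series:

```python
# Direct numerical evaluation of E_m[1/l] from the sequential occupancy
# distribution mu_m(l) = n(n-1)...(n-m+1) * S(l-1, m-1) / n^l  (standard fact,
# an assumption here), with the infinite sum truncated at lmax.

def stirling2_column(lmax, k):
    # S(l, j) for 0 <= l <= lmax, 0 <= j <= k, via S(l, j) = j*S(l-1, j) + S(l-1, j-1).
    S = [[0] * (k + 1) for _ in range(lmax + 1)]
    S[0][0] = 1
    for l in range(1, lmax + 1):
        for j in range(1, min(l, k) + 1):
            S[l][j] = j * S[l - 1][j] + S[l - 1][j - 1]
    return S

def expected_inv_length(n, m, lmax=600):
    S = stirling2_column(lmax, m - 1)
    ff = 1
    for i in range(m):
        ff *= n - i                      # falling factorial n(n-1)...(n-m+1)
    return sum(ff * S[l - 1][m - 1] / n ** l / l for l in range(m, lmax + 1))

# Sanity checks: E_1[1/l] = 1, and E_m[1/l] is decreasing in m (Remark 2.7(i)).
vals = [expected_inv_length(10, m) for m in range(1, 11)]
assert abs(vals[0] - 1.0) < 1e-12
assert all(a > b for a, b in zip(vals, vals[1:]))
```

For m = 2 this reproduces the geometric computation of Section 2.3, and the monotonicity in m matches Remark 2.7(i) below.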

Now we observe some elementary bounds.

Lemma 2.5 For 1 [less than or equal to] m [less than or equal to] n,

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

(We interpret the leftmost term as 0 when m = n.)

Proof: As is well known, l is a sum of m independent geometric random variables with means n/n, n/(n - 1),... , n/(n - m + 1). Thus

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (2.13)

where [H.sub.j] is the j-th harmonic number. This implies the first inequality. The second is Jensen's inequality. The third follows as l([chi]) [greater than or equal to] m for any [chi] [member of] [[chi].sub.m].
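A quick simulation illustrates the sandwich in Lemma 2.5. The helper names below are ours; the harmonic-number formula [E.sub.m][l] = n([H.sub.n] - [H.sub.n-m]) is (2.13):

```python
import random

def expected_wait(n, m):
    # E_m[l] = n * (H_n - H_{n-m}), from the sum of geometric means in (2.13).
    return n * sum(1.0 / j for j in range(n - m + 1, n + 1))

def mc_inv_wait(n, m, trials=20000, seed=1):
    # Monte Carlo estimate of E_m[1/l]: draw uniformly until m distinct objects.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        seen, l = set(), 0
        while len(seen) < m:
            seen.add(rng.randrange(n))
            l += 1
        total += 1.0 / l
    return total / trials

n, m = 20, 10
est = mc_inv_wait(n, m)
# Sandwich of Lemma 2.5: 1/E_m[l] <= E_m[1/l] <= 1/m.
assert 1.0 / expected_wait(n, m) <= est <= 1.0 / m
print(round(1.0 / expected_wait(n, m), 4), round(est, 4), 1.0 / m)
```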

If n [right arrow] [infinity] and m grows slower than [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], then l/[E.sub.m][l] [right arrow] 1 in probability [1]. Thus we might expect 1/[E.sub.m][l] to be a good approximation for [E.sub.m][1/l], as the following bound shows.

Lemma 2.6 For 1 < m < [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII],

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

For 1 < m [less than or equal to] n,

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Proof: Let x > 0. Chebyshev's inequality tells us

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Consequently,

[E.sub.m] [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

for any y [greater than or equal to] Var(l)/[x.sup.2]. Note

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII].

Set x = n/(n - m + 1), so [E.sub.m][l] - x = [E.sub.m-1][l]. Now we apply (2.14) with

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Observe

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

which is negative if m < [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], and one gets the first part. The second part follows from the crude bound

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Remark 2.7 (i) The preceding two lemmas imply that [E.sub.m][1/l] is decreasing in m for 1 [less than or equal to] m < [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]. However, note that it follows easily that [E.sub.m][1/l] is decreasing in m for all m [greater than or equal to] 1 because of the inequality of probabilities [[mu].sub.m](l [less than or equal to] X) [less than or equal to] [[mu].sub.m-1](l [less than or equal to] X) for all X.

(ii) In fact we expect the first part of Lemma 2.6 to hold for all 1 < m [less than or equal to] n, as well as the stronger bound [E.sub.m] [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], which appears to hold numerically. This stronger bound together with Lemma 2.5 would imply

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (2.15)

which means that for any of our four models M, the expected list search and cleanup costs, [F.sub.L](m) and [F.sub.C](m), are strictly decreasing in m. Furthermore, numerics suggest that the sequence 1/[E.sub.m][l] decreases in m faster than the sequence [E.sub.m][1/l]. For instance, it appears

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Lemma 2.8 Fix m [greater than or equal to] 1, and let [epsilon] > 0. Then for sufficiently large n,

[F.sub.L](m; n) > [F.sub.L](m + 1; n) - [epsilon] and [F.sub.C](m; n) > [F.sub.C](m + 1; n) - [epsilon].

In the case of unnumbered shelves, we may take [epsilon] = 0, i.e., for [m.sub.0] sufficiently small relative to n, [F.sub.L](m; n) and [F.sub.C](m; n) are decreasing in m for 1 [less than or equal to] m < [m.sub.0].

Proof: It is easy to see (e.g., by Lemma 2.5 or [1]) that, for fixed m, 1/[E.sub.m,n][l] and [E.sub.m,n][1/l] are increasing sequences in n with limit 1/m. This implies that for n sufficiently large

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

This implies the lemma as

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

and these differences can be bounded away from 0 in the unnumbered case.

2.5 Expected pile search cost

Now we consider the expected pile search cost

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

First we remark the following explicit formula for the expected values in the inner sum.

Lemma 2.9 For 1 [less than or equal to] j [less than or equal to] m - 1, we have

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Proof: If l = m, then [[tau].sub.j]([chi]) = 0 for all [chi] [member of] [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] and the formula trivially holds, so suppose l > m and let [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]. If r = [[tau].sub.j]([chi]) [greater than or equal to] 1, we can remove the element at position [t.sub.j+1] - 1 to get an element [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] such that

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

This map from [chi] to [chi]' is a j-to-1 surjective map, i.e., for j, r [greater than or equal to] 1 we have

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Summing over all r [greater than or equal to] 1, we see

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Similarly if r [greater than or equal to] k, we can remove the last k elements before position [t.sub.j+1] to get a [j.sup.k]-to-1 map into [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], and we have

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (2.16)

Now observe that

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

and apply Proposition 2.1.

Consequently, we have

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (2.17)

This expression allows us to get the following upper bound.

Proposition 2.10 For 1 [less than or equal to] j [less than or equal to] m - 1, the covariance Cov([[tau].sub.j], 1/l) < 0, i.e.,

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Proof: From (2.17) we have

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

That this is equivalent to the condition of negative covariance asserted above follows as

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

implies

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]
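The negative covariance in Proposition 2.10 is easy to probe by simulation. In the sketch below we assume, per the notation guide, that [[tau].sub.j]([chi]) counts the requests for an object already in the pile made while the pile has exactly j elements; the function names and parameters are ours:

```python
import random

# Monte Carlo check of Proposition 2.10: Cov(tau_j, 1/l) < 0.
def sample_path(n, m, rng):
    # Draw uniformly until m distinct objects are seen; record the pile
    # searches: a repeat request while |P| = j contributes to tau_j.
    seen, tau, l = set(), {}, 0
    while len(seen) < m:
        x = rng.randrange(n)
        l += 1
        if x in seen:
            tau[len(seen)] = tau.get(len(seen), 0) + 1
        else:
            seen.add(x)
    return l, tau

def cov_tau_inv_len(n, m, j, trials=100000, seed=11):
    rng = random.Random(seed)
    s_t = s_i = s_ti = 0.0
    for _ in range(trials):
        l, tau = sample_path(n, m, rng)
        t = tau.get(j, 0)
        s_t += t
        s_i += 1.0 / l
        s_ti += t / l
    return s_ti / trials - (s_t / trials) * (s_i / trials)

print(cov_tau_inv_len(10, 5, 2))   # sample covariance; should be negative
```

Heuristically, longer paths contain more repeats, so [[tau].sub.j] is positively associated with l and hence negatively associated with 1/l, as the proposition asserts.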

Remark 2.11 Suppose n [greater than or equal to] 4 and 1 [less than or equal to] j < m < n. Then numerically it appears that

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

This would imply that [F.sub.P](m) is increasing in m, and, in the case of complete memory,

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Lastly we note

Lemma 2.12 We have

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Proof: This follows from the observation that [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

2.6 Reinterpreting F(m)

From above, we can rewrite

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (2.18)

where

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

denotes the average total (search plus cleanup) cost of taking an object out of L. This expression yields the following interpretation (cf. Lemma 2.12): we can think of m[E.sub.m][1/l] as the probability that a given search will cost s*(m), and [E.sub.m][[[tau].sub.j]/l] as the probability that a given search will cost [s.sub.P](j).

It is easy to see that m [E.sub.m] [1/l] = 1 if and only if m = 1, so we have F(1) > F(m) whenever m > 1 satisfies [s.sub.P](m - 1) < s*(m). This yields

Lemma 2.13 Let 1 < m [less than or equal to] n.

1. In the case of unnumbered shelves, if m < 4b(n - m), then F(m) < F(1).

2. In the case of numbered shelves, if m < 4b(n), then F(m) < F(1).

3 An upper bound

For simplicity now, we will assume we are in model [M.sub.4] (complete memory, numbered shelves), though a similar argument can be used for [M.sub.2] as well. In this case we have

F(m) = 2[F.sub.L](m) + [F.sub.P](m).

Recall that F(1) = 2b(n). Thus, once the pile P has more than 4b(n) elements, a single average pile search must cost more than F([m.sub.opt](n)). This idea gives the following upper bound.

Proposition 3.1 Suppose n [greater than or equal to] 1. Then [m.sub.opt](n) < 4b(n).

We will first prove a lemma. We say a function f : [[chi].sub.m] [right arrow] R is additive if, for any [chi] = ([[chi].sub.1],... , [[chi].sub.l]) and any 1 [less than or equal to] k < l, we can write f as a sum of terms

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

where f([[chi].sub.j]; [P.sub.[chi]](j - 1)) depends only upon [[chi].sub.j] and what is in the pile before time j. We can naturally restrict such functions f to functions on [[chi].sub.k] for k < m. Note that all the cost functions we considered above are additive, and any linear combination of additive functions is additive.

Let m [greater than or equal to] k [greater than or equal to] 1. Define a restriction map [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], given by

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

We let [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] be the truncated tail from the restriction map, i.e.,

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Put [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] to be the pile after time [t.sub.k]([chi]).

Lemma 3.2 Suppose f : [[chi].sub.m] [right arrow] R is additive and m > k [greater than or equal to] 1. If

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (3.1)

for all [chi] [member of] [[chi].sub.m] and [X.sub.i] [member of] X, then

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (3.2)

If, further, the inequality in (3.1) is strict for some [chi] and [X.sub.i], then the inequality in (3.2) is also strict.

Proof: Since f and l are additive, we can write

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Then the above condition guarantees, for any [chi],

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Proof of Proposition 3.1: Set k = [??]4b(n)[??] and let k < m [less than or equal to] n. Let [[bar.S].sub.L]([chi]) (resp. [[bar.S].sub.P]([chi])) be the average of [S.sub.L]([[chi].sup.[sigma]]) (resp. [S.sub.P]([[chi].sup.[sigma]])), where [sigma] ranges over Sym(X). Let f = 2[[bar.S].sub.L] + [[bar.S].sub.P], so F(m) = [E.sub.m][f/l].

Then note that

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

since

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Furthermore, this inequality must be strict for some [chi] (in fact, one only gets equality when n + 1 is a power of 2 and [t.sub.j] ([chi]) = j for j < k).

On the other hand, for any [X.sub.i] [member of] X, we have f([X.sub.i];[P.sub.[chi],k]) [greater than or equal to] 2b(n) (with equality if [X.sub.i] [??] [P.sub.[chi],k]). Applying the above lemma, we see F(m) > F(k).

4 An approximate problem

Here we make a conjectural lower bound and asymptotic for [m.sub.opt](n) by comparing our problem with a simpler optimization problem. We continue, for simplicity, in the case of [M.sub.4], though similar approximate problems could be considered for [M.sub.1], [M.sub.2] and [M.sub.3] also.

Based on the bounds for [E.sub.m][1/l] and [E.sub.m][[[tau].sub.j]/l] above, we consider the approximate expected cost

F(m) = 2[F.sub.L](m) + [F.sub.P](m),

where

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

and

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Specifically, Lemma 2.5 and Proposition 2.10 imply [F.sub.L](m) [greater than or equal to] [F.sub.L] (m) and

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

We suspect that the approximation [F.sub.P](m) is much closer to this upper bound for [F.sub.P](m) than [F.sub.P](m) itself is, and so we should have [F.sub.P](m) [less than or equal to] [F.sub.P](m). This is supported by numerical evidence. (See Table 4.1 for some numerical calculations.) Moreover, since conjecturally [F.sub.L](m) is decreasing in m (and, empirically, faster than [F.sub.L](m) is), while [F.sub.P](m) is increasing in m (and, empirically, slower than [F.sub.P](m)), we make the following conjecture.

Let [m.sub.opt](n) be the value of m [member of] {1,2,... ,n} which minimizes F(m). We call the problem of determining [m.sub.opt] (n) an approximate search with cleanup problem.

Remark 4.1 Recall that for [m.sub.opt](n), we used the function b(n) from (1.5), which is an exact average search cost if n is a power of 2 and approximate otherwise (since we are working with a complete memory model here, [b.sub.f](n) does not arise). It is then natural to wonder whether using an exact formula for b(n) for all n makes much of a difference. This is relevant only for small n, and should not matter for the asymptotics for large n. However, choosing b(n) as in (1.5) does not make much of a difference for small n either; see the emboldened entries in the third row of Table 4.1 for the differences. The optimal value in the case of exact average search cost functions is labeled [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII](n).

Conjecture 4.2 Let M = [M.sub.4]. Then [m.sub.opt](n) [greater than or equal to] [m.sub.opt](n).

Theorem 4.3 For any n [greater than or equal to] 5, we have

3b(n) - 3/2 [less than or equal to] [m.sub.opt](n) < 3b(n) + 1/2.

In other words, for n [greater than or equal to] 5, [m.sub.opt](n) is either [??]3b(n) - 3/2[??] or [??]3b(n) - 3/2[??] + 1.

We note that in fact both possibilities of this theorem occur: sometimes [m.sub.opt](n) is [??]3b(n) - 3/2[??] and sometimes it is [??]3b(n) - 3/2[??] + 1, though numerically there seems to be a tendency for [m.sub.opt](n) to lie in the right half of this interval, i.e., most of the time [m.sub.opt](n) > 3b(n) - 1/2.
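To get a feel for the size of this window, one can tabulate it numerically. Since the exact form of b(n) from (1.5) is not reproduced above, the sketch below stands in the empirical average probe count of a textbook binary search, which b(n) approximates (about [log.sub.2] n, per the introduction); this substitution is our assumption:

```python
import math

# Empirical average-case successful binary search cost, used as a stand-in
# for b(n) (an assumption: the paper's (1.5) is an approximation to this).
def avg_binary_search_cost(n):
    total = 0
    for target in range(n):
        lo, hi, probes = 0, n - 1, 0
        while True:
            mid = (lo + hi) // 2
            probes += 1
            if mid == target:
                break
            if mid < target:
                lo = mid + 1
            else:
                hi = mid - 1
        total += probes
    return total / n

for n in [5, 20, 100, 1000]:
    b = avg_binary_search_cost(n)
    # The half-open window [3b - 3/2, 3b + 1/2) has length 2, so it contains
    # exactly the two integer candidates of Theorem 4.3.
    lo = math.ceil(3 * b - 1.5)
    print(n, round(b, 3), (lo, lo + 1))
```

The printed candidate pairs grow like 3[log.sub.2] n, in line with Conjecture 4.5 below.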

Proof: Let 1 [less than or equal to] m < n. We want to investigate when the difference

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (4.1)

is positive, i.e., when F(m) is decreasing in m. The above expression is positive if and only if

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Since [E.sub.m+1][l] - [E.sub.m][l] = n/(n - m), this is equivalent to

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (4.2)

The left hand side of (4.2) is

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

whereas the right hand side of (4.2) is

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Hence (4.2) holds if and only if

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (4.3)

Lemma 4.4 Let a, b, c [member of] Z with a > 1 and c [greater than or equal to] max{a, b}. The sum

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

is negative if b [less than or equal to] (a - 2)/3. This sum is positive if b [greater than or equal to] a/3 and c [greater than or equal to] 5[a.sup.2].

Proof: All nonzero terms of the sum are positive (resp. negative) if b [greater than or equal to] a (resp. b [less than or equal to] 0), so assume 0 < b < a. Now note the above sum is negative if and only if

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (4.4)

This is certainly the case if

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Writing d = a - b, we see this is true if

b(b + 1)(3d + 2b + 1) [less than or equal to] [d.sup.3] - d,

which holds if b [less than or equal to] d/2 - 1, i.e., if b [less than or equal to] (a - 2)/3.

Similarly, the sum in the lemma is positive if

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

i.e., if

b(b + 1)(3d + 2b + 1) [greater than or equal to] [c/(c - a)]([d.sup.3] - d).

Suppose b [greater than or equal to] a/3, i.e., b [greater than or equal to] d/2. Then the above inequality is satisfied if

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

which holds if

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

which holds if

(c - a)/(4a) [greater than or equal to] a [greater than or equal to] a - b = d.

This holds if c [greater than or equal to] 5[a.sup.2] [greater than or equal to] 4[a.sup.2] + a.

Now applying the first part of this lemma with a = m, c = n and b = [??]4b(n) - m - 1[??] [less than or equal to] 4b(n) - m, we see

m [greater than or equal to] 3b(n) + 1/2 implies F(m) < F(m + 1),

hence [m.sub.opt](n) < 3b(n) + 1/2.

Similarly, applying the second part of the lemma with a = m, c = n and b = [??]4b(n) - m - 1[??] [greater than or equal to] 4b(n) - m - 2, we see

m < 3b(n) - 3/2 and n [greater than or equal to] 5[m.sup.2] imply F(m) > F(m + 1).

Note m < 3b(n) - 3/2 implies 5[m.sup.2] [less than or equal to] n when 45[(b(n) - 1.5).sup.2] [less than or equal to] n. If n is large enough that 45[(b(n) - 1.5).sup.2] [less than or equal to] n, then we have [m.sub.opt](n) [greater than or equal to] 3b(n) - 3/2. This is satisfied if n > 4050. When n [less than or equal to] 4050, one can compute directly that (4.3) holds for all m < 3b(n) - 3/2. Lastly, note that n > 3b(n) - 3/2 for n [greater than or equal to] 5.

Hence, we should have 3b(n) - 3/2 [less than or equal to] [m.sub.opt](n) < 4b(n). Here only the lower bound is conjectural. At least for small n, the table above suggests [m.sub.opt](n) to be a very good approximation to [m.sub.opt](n). This suggests that [m.sub.opt](n) grows like [m.sub.opt](n) plus some term of smaller order, and we are led to

Conjecture 4.5 As n [right arrow] [infinity], we have the asymptotic [m.sub.opt](n) ~ 3b(n).

[FIGURE 1 OMITTED]

Remarks

4.6 From Table 4.1, we note that for n [less than or equal to] 8, one should not clean up until all the objects are in the pile. One might ask, for n [less than or equal to] 8, if one should ever clean up, i.e., is F(n) at least less than the cost of an average pile search, (n + 1)/2? Calculations show this is only true for n = 7 and n = 8, i.e., one should never clean up if n [less than or equal to] 6. (This entire remark does not depend on whether one uses the approximate average search cost function b(n) in (1.5) or replaces b(n) with the exact average search cost as discussed in Remark 4.1.)

4.7 For [M.sub.3], calculations also say that [m.sub.opt](n) = n for n [less than or equal to] 8, but here it is only better to never clean up if n [less than or equal to] 2. Furthermore, calculations suggest that [m.sub.opt](n) ~ 4b(n) for [M.sub.3].

4.8 To see how close F(m) is to F(m), we plotted both for n = 20 in Figure 1. Note that there are significant cost savings to be had by waiting until [m.sub.opt] to clean up. However, for large n, the graph of F(m) will be more lopsided, i.e., the right endpoint F(n) will be much larger than the left endpoint F(1). It is not feasible to compute all values of F(m) for large n, but we graph F(m) for n = 100 in Figure 2. We expect that the graph of F(m) will have a similar shape, and that for large n the cost savings of waiting until [m.sub.opt] to clean up are proportionally smaller. However, the cost savings should be more pronounced for non-uniform distributions.

5 Non-uniform distributions

Finally we comment on the problem for general probability distributions on X. Now if one defines the cost functions S(X; P) and C(P) using algorithm A as in Section 1.2, these cost functions do not just depend upon the multiset of probabilities {[mu]([X.sub.i])}, but upon the specific distribution.

[FIGURE 2 OMITTED]

Example 5.1 Fix 1 [less than or equal to] r [less than or equal to] n and 0 [less than or equal to] [epsilon] [less than or equal to] 1. Now take the distribution given by [mu]([X.sub.r]) = 1 - [epsilon] and [mu]([X.sub.i]) = [epsilon]/(n - 1) for i [not equal to] r. Assuming [epsilon] is small, most of the time one will be searching for [X.sub.r]. Depending on what r is, the search (as well as cleanup) cost associated to [X.sub.r] might be as low as 1 or as high as [b.sub.f](n) [approximately equal to] [log.sub.2](n). Hence, at least for certain values of [epsilon] and n, one might expect the answer to the associated search with cleanup optimization problem to depend upon the choice of r.

Therefore, we define our cost functions not using the exact search costs given by algorithm A, but rather on the associated average search costs. Specifically, in the complete memory case, we set

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (5.1)

and

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (5.2)

In the case of the uniform distribution on X, this gives us the same optimization problem we studied above.

Note that in the case of no memory, it may be better to always search the pile first, depending on how skewed the distribution is. For instance, in Example 5.1, if [epsilon] is sufficiently small, then with high probability at any t [greater than or equal to] 1 we will be looking for [X.sub.r] and it will be in the pile. Thus we should always search the pile first. Furthermore, by this reasoning (in either the complete or no memory case), for [epsilon] small enough, we should clean up whenever another object gets in the pile, i.e., [m.sub.opt](n) = 2.

Consequently, we can decompose the average total cost as in the uniform case

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (5.3)

though now the quantities [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] and [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] will be more complicated. In this case, the probability functions for the underlying Markov process will follow more general sequential occupancy distributions (see, e.g., [6] or [4]).

Note that for a nonuniform distribution, typically objects with higher probabilities will be in the pile at any given time, so the pile search costs will be higher than in the uniform case. Put another way, the expected waiting time [E.sub.m][l] until cleanup is minimized for the uniform distribution (see, e.g., [7], [5], [3] and [2] for results on [E.sub.m][l]). Therefore, the more skewed the distribution is, the faster the probabilities [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] should be increasing in m, i.e., the smaller [m.sub.opt](n) should be, as indicated in our example above. In particular, we expect [m.sub.opt] (n) is maximized for the uniform distribution.
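This minimality claim is easy to probe by simulation: compare the mean waiting time until m distinct objects are drawn under the uniform distribution with that under a skewed one. The particular skewed distribution and trial count below are illustrative choices of ours:

```python
import random

# Monte Carlo comparison of the expected waiting time E_m[l] until m distinct
# objects have been drawn, uniform versus skewed request probabilities.
def mean_wait(probs, m, trials=20000, seed=7):
    rng = random.Random(seed)
    n = len(probs)
    total = 0
    for _ in range(trials):
        seen, l = set(), 0
        while len(seen) < m:
            seen.add(rng.choices(range(n), weights=probs)[0])
            l += 1
        total += l
    return total / trials

uniform = [0.2] * 5
skewed = [0.6, 0.1, 0.1, 0.1, 0.1]
print(mean_wait(uniform, 4), mean_wait(skewed, 4))  # uniform should be smaller
```

With these choices the uniform mean is close to the exact value 5([H.sub.5] - [H.sub.1]) [approximately equal to] 6.42 from (2.13), while the skewed mean is noticeably larger, consistent with the discussion above.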

Acknowledgements

It is a pleasure to thank our colleague Alex Grigo for helpful comments and suggestions. We thank the referee whose careful reading has helped improve the exposition. We also thank our parents for never making us clean up our rooms.

References

[1] Leonard E. Baum and Patrick Billingsley. Asymptotic distributions for the coupon collector's problem. Ann. Math. Statist., 36:1835-1839, 1965.

[2] Petra Berenbrink and Thomas Sauerwald. The weighted coupon collector's problem and applications. In Computing and combinatorics, volume 5609 of Lecture Notes in Comput. Sci., pages 449-458. Springer, Berlin, 2009.

[3] Shahar Boneh and Vassilis G. Papanicolaou. General asymptotic estimates for the coupon collector problem. J. Comput. Appl. Math., 67(2):277-289, 1996.

[4] Charalambos A. Charalambides. Combinatorial methods in discrete distributions. Wiley Series in Probability and Statistics. Wiley-Interscience [John Wiley & Sons], Hoboken, NJ, 2005.

[5] Philippe Flajolet, Daniele Gardy, and Loys Thimonier. Birthday paradox, coupon collectors, caching algorithms and self-organizing search. Discrete Appl. Math., 39(3):207-229, 1992.

[6] Norman L. Johnson and Samuel Kotz. Urn models and their application. John Wiley & Sons, New York-London-Sydney, 1977. An approach to modern discrete probability theory, Wiley Series in Probability and Mathematical Statistics.

[7] Harmindar B. Nath. On the collector's sequential sample size. Trabajos Estadist. Investigacion Oper., 25(3):85-88, 1974.

Kimball Martin (*) Krishnan Shankar ([dagger])

Department of Mathematics, University of Oklahoma, Norman, Oklahoma, USA

received 10th Jan. 2015, revised 11th May 2015, accepted 15th June 2015.

(*) Supported in part by Simons Collaboration Grant 240605.

([dagger]) Supported in part by NSF Grant DMS #1104352.

A Notation guide

Section 1.1

X a set of n objects (the books) [X.sub.1],... , [X.sub.n]

[mu] a probability measure on X (and later on [[chi].sub.m])

L a sorted list (the shelves)

P an unsorted list (the pile)

[[chi].sub.m] = [[chi].sub.m,n] the finite sequences (paths) of objects in X consisting of m distinct objects, where the last object is distinct from the previous ones

[chi] a path in [[chi].sub.m]

[[chi].sub.t] the t-th object in [chi]

l([chi]) the length of [chi]

[P.sub.[chi]] (t) the set of objects in P at time t along path [chi]

S(X; P) the search cost for object X [member of] L [union] P given a certain pile P

C(P) the cleanup cost for a certain pile P

S([chi]) the total search cost along path [chi]

C([chi]) the cleanup cost for path [chi]

F(m) = F(m; n) the average total per-search cost for cleaning up when |P| = m

[m.sub.opt] (n) = [m.sub.opt] (n; M) the argument which minimizes F(m)

Section 1.2

[M.sub.1] the no memory, unnumbered shelves model

[M.sub.2] the no memory, numbered shelves model

[M.sub.3] the complete memory, unnumbered shelves model

[M.sub.4] the complete memory, numbered shelves model

A a search algorithm for the model

b(j) the average case successful binary search cost on a sorted list of length j

[b.sub.f] (j) the average case failed binary search cost on a sorted list of length j

s(j) the average case sequential search cost on a list of length j

[s.sub.L] (j) the average cost to search for an element of L when the list size is j

[s.sub.P] (j) the average cost to search for an element of P when the pile size is j

[C.sub.m] the average cleanup cost for a pile of size m

Section 2.1

E[f] = [E.sub.m] [f] = [E.sub.m,n] [f] the expected value of a function on [[chi].sub.m,n]

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] the paths in [[chi].sub.m] of length l

[F.sub.S] (m) the expected search cost per search

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] the Stirling number of the second kind

[S.sub.L]([chi]) the contribution to S([chi]) from searches for objects in L

[S.sub.P] ([chi]) the contribution to S([chi]) from searches for objects in P

Sym(X) the symmetric group on X

[[tau].sub.j]([chi]) the number of times one does a j-element pile search along [chi]

Section 2.2

[F.sub.L] (m) the expected value of [S.sub.L] per search

[F.sub.P] (m) the expected value of [S.sub.P] per search

[F.sub.C](m) the expected cleanup cost per search

Section 4

F(m), [F.sub.L](m), [F.sub.P](m) certain approximations for F(m), [F.sub.L](m), [F.sub.P](m)

[m.sub.opt] (n) the argument minimizing F(m)

(i) By list, we mean an indexed list, rather than a linked list.

(ii) For a discussion of how this affects the problem for smaller values of n, see Remark 4.1.

Tab. 4.1: Small values of [m.sub.opt](n), the approximate optimum [m.sub.opt](n) of Section 4, and the variant of [m.sub.opt](n) computed with exact average search costs (cf. Remark 4.1).

n                      1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18
[m.sub.opt]            1  2  3  4  5  6  7  8  8  8  9  9  9  9  9 10 10 10
approx. [m.sub.opt]    1  2  3  4  5  6  7  8  8  8  9  9  9 10 10 10 10 10
exact-cost variant     1  2  3  4  5  6  7  8  8  9  9  9  9 10 10 10 10 11

n                     19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
[m.sub.opt]           10 11 11 11 11 11 11 12 12 12 12 12 12 12 12 13 13
approx. [m.sub.opt]   11 11 11 11 11 12 12 12 12 12 12 12 13 13 13 13 13
exact-cost variant    11 11 11 12 12 12 12 12 12 12 12 13 13 13 13 13 13

Publication: DMTCS Proceedings, Jan. 1, 2015.