

The size of random fragmentation intervals.

Two processes of random fragmentation of an interval are investigated. For each of them, there is a splitting probability at each step of the fragmentation process whose overall effect is to stabilize the global number of splitting events. More precisely, we consider two models. In the first model, the fragmentation stops with a probability that does not depend on the fragment size. The number of stable fragments with sizes less than a given t [greater than or equal to] 0, denoted by K(t), is introduced and studied.

In the second model, the probability of splitting a fragment of size x is p(x) = 1 − e^{−x}. For this model we use the contraction method to show that the distribution of a suitably normalized version of the number of stable fragments converges in law. It is shown that the limit is the fixed-point solution (in the Wasserstein space) of a distributional equation. An explicit solution of the fixed-point equation is easily verified to be Gaussian.

Keywords: Fragmentation models, fixed point, contraction method, Mellin transform.

1 Introduction

Fragmentation is a widely studied phenomenon [12], with applications ranging from conventional fracture of solids [6] and collision-induced fragmentation of atomic nuclei and aggregates [1] to seemingly unrelated fields such as disordered systems [4] and geology. Fragmentation processes are also relevant to spin glasses, Boolean networks and genetic populations.

We investigate two classes of stochastic fragmentation processes. In Section 2 we consider a random fragmentation process introduced by Krapivsky, Ben-Naim and Grosse [5], in which fragmentation stops stochastically, with a splitting probability that does not depend on the mass x of the fragment. In Section 3, we consider the random fragmentation process investigated by Janson and Neininger [16], where the splitting probability depends, by nature, on the fragment length. A particle of mass x is broken, with probability p(x) = 1 − e^{−x}, into two pieces. The mass is distributed among the pieces at random, in such a way that the proportions of the mass shared among the daughters are specified by a given probability distribution (the dislocation law). The fragmentation process is repeated recursively for all pieces. For this model we derive closed-form expressions for the mean and variance of the total number N(x) of stable fragments. Finally, using the contraction method in continuous time established by Janson and Neininger [16], we prove a central limit theorem for N(x).

2 Homogeneous random fragmentation process

Let (ξ_1, ξ_2, ..., ξ_m) be a given random vector. We assume that each ξ_j has a distribution that is absolutely continuous on (0, 1) and that

ξ_1 + ξ_2 + ⋯ + ξ_m = 1 almost surely,

i.e., that (ξ_1, ..., ξ_m) belongs to the simplex.

We start with an interval of length 1, which is split at the first step into m (m ≥ 2) fragments with random sizes (ξ_1, ..., ξ_m). Thereafter, each sub-fragment is split, independently, with probability p ∈ (0, 1); with probability q := 1 − p it remains unchanged for ever. The splitting process is iterated on every new fragment, a property which we refer to in the sequel as the renewal structure of the process. Similar processes have been investigated by Krapivsky, Ben-Naim and Grosse [5]. The term homogeneous refers to the fact that in such models the splitting probability is independent of the fragment sizes at each step. This model can be understood through a discrete-time Galton–Watson branching process.

Let N be the total number of stable fragments.

2.1 The mean

By the recursive nature of the process, we can write, almost surely

N = Σ_{j=1}^{m} ( 1_{{fragment j stable}} + 1_{{fragment j unstable}} N^{(j)} ),  (1)

where N^{(1)}, ..., N^{(m)} are independent copies of N, independent of (ξ_1, ξ_2, ..., ξ_m). Hence the mean E(N) of N is given by the following proposition.

Proposition 1. For p < 1/m,

E(N) = m(1 − p) / (1 − mp).

The average total number of fragments diverges as the probability p approaches the critical point p_c = 1/m, reflecting the critical nature of the corresponding branching process.
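As a sanity check, Proposition 1 can be tested by simulation. The sketch below is a minimal Monte Carlo, assuming the variant used in the formulas of this section: the initial interval is split at the first step, and every subsequent fragment splits into m pieces with probability p, independently of its size (the fragment sizes never enter the count, so only the branching structure is simulated):

```python
import random

def total_stable_fragments(m, p, rng):
    """Simulate one homogeneous fragmentation and return the number N
    of stable fragments. The first split always occurs; afterwards each
    fragment splits into m pieces with probability p."""
    stable = 0
    unstable = m  # fragments produced by the first split
    while unstable:
        unstable -= 1
        if rng.random() < p:
            unstable += m   # split: one fragment replaced by m new ones
        else:
            stable += 1     # the fragment is stable for ever
    return stable

rng = random.Random(1)
m, p = 3, 0.2
samples = [total_stable_fragments(m, p, rng) for _ in range(200_000)]
mean = sum(samples) / len(samples)
# Proposition 1 (under the assumption above): E(N) = m(1-p)/(1-m p) = 6
print(mean)
```

With m = 3 and p = 0.2 the predicted mean is 6; the empirical mean should agree to within Monte Carlo error. Note also that each split replaces one fragment by m, so N ≡ m (mod m − 1) deterministically.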

Remark: Let ψ_N(t) = E(e^{tN}), t ∈ (−ε, ε), for some ε > 0.

To obtain the moment of order k, it suffices to differentiate ψ_N k times. Using equation (1), one can note that the function ψ_N is a solution of the equation

ψ_N(t) = ( q e^{t} + p ψ_N(t) )^m,  |t| < ε.
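This fixed-point equation can be checked numerically. The sketch below (with the hypothetical choice m = 3, p = 0.2) iterates the map ψ ↦ (qe^t + pψ)^m, which is a contraction for small |t|, and recovers E(N) = ψ'_N(0) by a centered finite difference:

```python
import math

def psi(t, m=3, p=0.2, iters=200):
    """Solve psi = (q*e^t + p*psi)^m by fixed-point iteration,
    starting from psi(0) = 1."""
    q = 1.0 - p
    val = 1.0
    for _ in range(iters):
        val = (q * math.exp(t) + p * val) ** m
    return val

h = 1e-4
mean_estimate = (psi(h) - psi(-h)) / (2 * h)  # psi'(0) = E(N)
print(mean_estimate)  # should be close to m(1-p)/(1-m p) = 6
```

Differentiating the equation at t = 0 (where ψ_N(0) = 1) gives ψ'_N(0) = m(q + p ψ'_N(0)), i.e. exactly the mean of Proposition 1, which the numerical derivative reproduces.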

2.2 Fragment length density

For t [member of] [0, 1], let K(t) be the total number of stable fragments of length less than t, and M(t) = E(K(t)). For y [greater than or equal to] 0, let K(y, t) be the total number of stable fragments of length less than t produced by a fragment of length y, and let M(y, t) = E(K(y, t)). The recursive nature of the process can be used to obtain

M(y, t) = q 1_{{y < t}} + p Σ_{j=1}^{m} E[ M(ξ_j y, t) ].

Moreover, by self-similarity, for every s, t [member of] [0, 1], we have

M(s, t) = M(1, t/s) =: M(t/s).

Consequently,

M(t) = (E(N)/m) Σ_{j=1}^{m} F_j(t) + p Σ_{j=1}^{m} ∫_t^1 M(t/x) f_j(x) dx,  (2)

where F_j (respectively f_j) is the distribution function (respectively the density) of the random variable ξ_j. Equation (2) can be solved with the help of the Mellin transform of M,

M̃(s) = ∫_0^1 t^{s−1} M(t) dt.

It is clear that M(t) is well defined and piecewise continuous, with

M(t) [member of] [0, E(N)] for all t [member of] [0, 1].

Hence the Mellin transform of M exists in the half-plane

S = {s [member of] C : Re(s) > 0}.

Denote by f̃_j, for each j ≤ m, the Mellin transform of f_j. The Mellin transform of M then reads

M̃(s) = [ (E(N)/m) Σ_{j=1}^{m} (1 − f̃_j(s+1)) ] / [ s ( 1 − p Σ_{j=1}^{m} f̃_j(s+1) ) ].  (3)

Using the Mellin inverse, we deduce that, for all c > 0 and t [member of] (0, 1],

M(t) = (1/(2iπ)) ∫_{c−i∞}^{c+i∞} M̃(s) t^{−s} ds.

2.3 Applications

2.3.1 Uniform m-fragmentation

In this subsection each unstable fragment of size l splits into m fragments, determined by m − 1 independent, uniformly distributed cut points in the interval [0, l]. For the unit interval, the fragment sizes are the spacings of the order statistics U_{(1)} ≤ ⋯ ≤ U_{(m−1)} of m − 1 independent uniform random variables on [0, 1]:

(ξ_1, ξ_2, ..., ξ_m) = ( U_{(1)}, U_{(2)} − U_{(1)}, ..., 1 − U_{(m−1)} ).

Let ξ be a generic spacing, distributed like ξ_1. It is known that ξ has the density

f(s) = f_ξ(s) = (m − 1)(1 − s)^{m−2},  0 < s < 1.

Thus, for Re(s) > 0,

f̃(s) = (m − 1)! / ( s(s+1) ⋯ (s+m−2) ).  (4)
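The Mellin transform (4) encodes the moments E(ξ^s) = f̃(s+1); for integer s these are easy to verify by simulating the spacings directly. A small sketch with m = 3, where the formula gives E(ξ) = 2!/(2·3) = 1/3 and E(ξ²) = 2!/(3·4) = 1/6:

```python
import random

rng = random.Random(7)
m = 3
xs = []
for _ in range(200_000):
    # m-1 = 2 i.i.d. uniform cut points; the first spacing is the minimum
    cuts = sorted(rng.random() for _ in range(m - 1))
    xs.append(cuts[0])  # xi_1 = U_(1), distributed like a generic spacing

n = len(xs)
mean = sum(xs) / n                    # compare with E(xi)   = 1/3
second = sum(x * x for x in xs) / n   # compare with E(xi^2) = 1/6
print(mean, second)
```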

From (3) one can obtain that, for all s [member of] S,

M̃(s) = E(N) ( (s+1)(s+2) ⋯ (s+m−1) − (m−1)! ) / [ s ( (s+1)(s+2) ⋯ (s+m−1) − p·m! ) ].  (5)

2.3.2 Uniform 3-fragmentation case

In the particular case m = 3 we get an explicit expression for M(t). In fact, from equation (2) we get

M(t) = E(N)(2t − t²) + 6p ∫_t^1 M(t/x)(1 − x) dx.  (6)

Let P(t) be the fragment length density of stable fragments of length t, so that

M(t) = ∫_0^t P(x) dx.

Due to the last equality and (6), it follows that

P(t) = 6q − 6qt + 6p ∫_t^1 ( P(s)/s ) ds − 6pt ∫_t^1 ( P(s)/s² ) ds.  (7)

Then one can observe that the total length is conserved:

∫_0^1 t P(t) dt = 1.

Furthermore, by integrating (7), we recover the value of E(N) already given by Proposition 1:

∫_0^1 P(t) dt = E(N).

The Mellin transform M̃ can be obtained in this case from (5) by choosing m = 3:

M̃(s) = E(N)(s + 3) / ( (s+1)(s+2) − 6p ).

Using the Mellin inverse, this leads to the following theorem.

Theorem 1 For all t [member of] [0, 1],

M(t) = ( E(N) / (2δ) ) [ (3 + δ) t^{(3−δ)/2} − (3 − δ) t^{(3+δ)/2} ],  where δ := √(1 + 24p).
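This closed form can be cross-checked numerically against the integral equation (6): iterating (6) is a contraction (with factor at most 3p < 1), so the iteration converges to the unique solution, which should match the expression above. A sketch with the hypothetical parameter p = 0.2, taking E(N) = 3(1 − p)/(1 − 3p) = 6 as in Proposition 1, and δ = √(1 + 24p):

```python
import numpy as np

p = 0.2                            # hypothetical splitting parameter, p < 1/3
EN = 3 * (1 - p) / (1 - 3 * p)     # assumed mean number of stable fragments (= 6)
delta = np.sqrt(1 + 24 * p)

n = 800
t = np.linspace(1e-3, 1.0, n)

# closed form obtained by Mellin inversion (residue calculus)
M_closed = (EN / (2 * delta)) * ((3 + delta) * t ** ((3 - delta) / 2)
                                 - (3 - delta) * t ** ((3 + delta) / 2))

# fixed-point iteration of M(t) = EN*(2t - t^2) + 6p * int_t^1 M(t/x)(1-x) dx,
# rewritten with the substitution u = t/x as 6p * int_t^1 M(u)(t/u^2 - t^2/u^3) du
M = np.zeros(n)
for _ in range(60):
    new = np.empty(n)
    for i in range(n):
        u = t[i:]
        f = M[i:] * (t[i] / u ** 2 - t[i] ** 2 / u ** 3)
        integral = np.sum((f[1:] + f[:-1]) * np.diff(u)) / 2  # trapezoid rule
        new[i] = EN * (2 * t[i] - t[i] ** 2) + 6 * p * integral
    M = new

err = float(np.max(np.abs(M - M_closed)[t > 0.05]))
print(err)
```

At t = 1 both sides equal E(N) exactly; elsewhere the discrepancy is dominated by the quadrature error of the grid.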

Remark: For 0 ≤ s ≤ t ≤ 1, denote by λ(s, t) the mean number of stable fragments with length between s and t. It is given by

λ(s, t) := M(t) − M(s).

3 Exponential fragmentation probability

Throughout this section we assume that the first fragment has length x (large enough) and that the probability p(s) that a fragment of size s splits depends on the fragment size s. This is relevant for impact fragmentation and for DNA segmentation, where fragments have an intrinsic size scale below which the fragmentation probability becomes negligible.

If p(s) = 1_{{s ≥ 1}}, m = 2 and ξ is uniform on [0, 1], this model was studied by Sibuya and Itoh [15]. Recently, Janson and Neininger [16] studied the general case where p(s) = 1_{{s ≥ 1}}, m ≥ 2, and the support of the distribution of (ξ_1, ξ_2, ..., ξ_m) on the standard simplex has an interior point.

In our model, we assume that m = 2, ξ_1 is uniform on [0, 1] (so that ξ_2 = 1 − ξ_1), and

p(s) = 1 − e^{−s}.

It is natural to consider the fragmentation process as a tree T([infinity]), with the root representing the original object (first fragment of size x), its children representing the pieces of the first fragmentation, and so on. We let the fragmentation go on for ever, although we ignore what happens to stable pieces. Let us mark each node with the mass of the corresponding object. A node is unstable if the corresponding fragment is unstable. We define the fragmentation tree T(x) to be the subtree of T([infinity]) consisting of all unstable nodes.

As mentioned in [16], our model can help to approximate the binary search tree by a fragmentation tree. In fact, for binary search trees, we have n random (uniformly distributed) points in an interval; we split the interval at the first of these points, and continue recursively, splitting each subinterval that contains at least one of the points. If we scale the initial interval to have length n, then the probability that a subinterval of length x contains at least one point is approximately 1 − e^{−x}.

Let N(x) be the total number of stable fragments when we start with an interval of length x (N(x) is also the number of nodes in T(x)) and

m(x) = E(N(x)), V(x) = Var(N(x)).

By the recursive construction of the fragmentation process, we have, almost surely

N(x) = 1_{{x stable}} + 1_{{x unstable}} Σ_{j=1}^{2} N^{(j)}(ξ_j x),  (8)

where ( N^{(j)}(·) )_{j=1,2} are copies of N(·), independent of each other and of (ξ_1, ξ_2).

3.1 The mean

By equation (8), the mean m(x) satisfies

m(x) = e^{−x} + ( 2(1 − e^{−x})/x ) ∫_0^x m(t) dt.  (9)

By the nature of the process, we have m(0) = 1. The solution of equation (9) can be written as

m(x) = e^{−x} + ψ(x) ∫_0^x e^{−t} exp( − ∫_0^t 2(1 − e^{−u})/u du ) dt,

where

ψ(x) = ( 2(1 − e^{−x})/x ) exp( ∫_0^x 2(1 − e^{−t})/t dt ).

Consequently

Theorem 2 As x [right arrow] [infinity]

m(x) = 2[gamma]x + O(1), (10)

with

γ = ∫_0^∞ t^{−2} e^{−t − 2E_1(t)} dt,  where E_1(t) := ∫_t^∞ (e^{−u}/u) du is the exponential integral.
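Both the linear growth and the constant can be checked numerically. The sketch below marches the integral equation (9) along a grid with the trapezoidal rule, and independently evaluates the assumed representation γ = ∫_0^∞ t^{−2} e^{−t − 2E_1(t)} dt, computing E_1 through the identity E_1(t) = ∫_0^t (1 − e^{−u})/u du − ln t − γ_Euler; the slope of the numerical solution should then be close to 2γ:

```python
import numpy as np

# --- solve m(x) = e^{-x} + (2(1-e^{-x})/x) * int_0^x m(t) dt by marching ---
h, X = 0.01, 60.0
x = np.arange(h, X + h, h)
m = np.empty(x.size)
I, prev = 0.0, 1.0          # running integral of m; m(0) = 1
for k, xk in enumerate(x):
    c = 2.0 * (1.0 - np.exp(-xk)) / xk
    # trapezoid step I_k = I_{k-1} + h*(m_{k-1} + m_k)/2, solved for m_k
    m[k] = (np.exp(-xk) + c * (I + h * prev / 2.0)) / (1.0 - c * h / 2.0)
    I += h * (prev + m[k]) / 2.0
    prev = m[k]
slope = (m[-1] - m[m.size // 2]) / (x[-1] - x[m.size // 2])

# --- gamma = int_0^inf t^{-2} exp(-t - 2 E1(t)) dt ---
dt = 1e-3
t = np.arange(dt, 50.0, dt)
g = (1.0 - np.exp(-t)) / t
G = np.concatenate(([0.0], np.cumsum((g[1:] + g[:-1]) / 2.0))) * dt
G += dt * (1.0 + g[0]) / 2.0               # piece from 0 to t[0]; g(0) = 1
e1 = G - np.log(t) - np.euler_gamma        # E_1(t) via the stated identity
f = np.exp(-t - 2.0 * e1) / t ** 2
gamma = np.sum((f[1:] + f[:-1]) / 2.0) * dt
gamma += dt * (np.exp(2 * np.euler_gamma) + f[0]) / 2.0  # head piece: limit e^{2*euler_gamma}
print(slope / 2.0, gamma)
```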

Remark: If each unstable interval is instead divided at b − 1 randomly chosen points, that is, b − 1 independent points drawn from the uniform distribution on [0, x], the mean function m(x) satisfies the differential equation

d^{b−1}/dx^{b−1} ( x^{b−1} m(x) ) − b!(1 − e^{−x}) m(x) = b! e^{−x}.

3.2 The variance

Let S(x) = 1_{{x stable}} and V(x) = Var(N(x)). In view of (8),

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Finally, V(x) satisfies

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (11)

where

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Let H(t) = h(t) − 2(1 − e^{−t})/t. Then the solution of equation (11) is given by

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Consequently, we have the following theorem, which gives an asymptotic expression for V(x) for large values of x.

Theorem 3 As x → ∞, we have

Var(N(x)) = 2λx − 2/x + O(e^{−x/2}),  (12)

where

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

3.3 Limit law

Let

N_*(x) = ( N(x) − m(x) ) / √(V(x)).

Based on heuristics drawn from the structure of the problem, a solution is guessed for the limit distribution of N_*(x). The guess is then verified by showing convergence of the distribution function to that of the guessed limit in some metric space. The contraction method was introduced by Rösler [14]. Rachev and Rüschendorf [9] added several useful extensions. General contraction theorems and multivariate extensions were given by Rösler [11] and Neininger [8]. Rösler and Rüschendorf [13] provide a valuable survey. Recently, general contraction theorems with a continuous parameter were established by Janson and Neininger [16].

By the recursive decomposition of the process N(x), the mean m(x) satisfies

m(x) = e^{−x} + (1 − e^{−x}) Σ_{j=1}^{2} E( m(ξ_j x) ).  (13)

We start from the recursive decomposition (8), adapted in the form (using (13))

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

where ξ is a random variable with the uniform law on [0, 1]. The last equation can be written as

N_*(x) = Σ_{j=1}^{2} N_*^{(j)}(ξ_j x) √( V(ξ_j x)/V(x) ) + D(x),  (14)

where

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

In order to apply Theorem 5.1 of [16], we introduce the map T on the space M of probability measures on R by

T : M → M,

Tμ = L( Σ_{j=1}^{2} √(ξ_j) X^{(j)} ),  (15)

where X^{(1)}, X^{(2)} are independent random variables with distribution μ, independent of (ξ_1, ξ_2). We need the following result.

Proposition 2 As x [right arrow] [infinity], we have

(i) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

(ii) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

(iii) Σ_{j=1}^{2} E( ξ_j^{3/2} ) = 4/5.

To prove this proposition we need the following lemma.

Lemma 1 For all x > 0, E( [N(x)]³ ) < ∞. Furthermore, sup_{0 ≤ y ≤ x} E( [N(y)]³ ) < ∞.

Proof:

For j = 2, 3, let m_j(x) = E( [N(x)]^j ) and

L(x) = e^{−x} + 6(1 − e^{−x}) E[ m_2(ξx) m((1 − ξ)x) ].

By equation (8),

m_3(x) = L(x) + ( 2(1 − e^{−x})/x ) ∫_0^x L(t) exp( ∫_t^x 2(1 − e^{−y})/y dy ) dt.

This implies that

m_3(x) < ∞ for every x > 0.

The final statement follows because 0 [less than or equal to] N(y) [less than or equal to] N(x) when 0 [less than or equal to] y [less than or equal to] x.

Proof of Proposition 2:

(i) For x large enough we have, almost surely, V(ξ_j x)/V(x) ≤ 1. Then, for all j ∈ {1, 2}, by the dominated convergence theorem,

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Using Lemma 1, there exists a constant c such that

‖ 1_{{x stable}} Σ_{j=1}^{2} N_*^{(j)}(ξ_j x) √( V(ξ_j x)/V(x) ) ‖_3 ≤ c e^{−x/3}.

(ii) follows directly from Theorem 3.

(iii) The random variables ξ_1 and ξ_2 both have the uniform distribution on (0, 1); hence

Σ_{j=1}^{2} E( ξ_j^{3/2} ) = 2 ∫_0^1 t^{3/2} dt = 4/5.

Since the conditions of Theorem 5.1 of [16] are satisfied, one can derive the following result.

Proposition 3 N_*(x) converges in distribution to a limit N_*,

where L(N_*) is the unique fixed point of T given in (15) subject to

E( N_*³ ) < ∞,  E( N_* ) = 0  and  Var( N_* ) = 1.

It is not difficult to prove that the law N(0, 1) is a fixed point of the map T, leading to our last theorem.
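Indeed, conditionally on ξ_1, the sum √ξ_1 X^{(1)} + √ξ_2 X^{(2)} of two independent standard normals is centered Gaussian with variance ξ_1 + ξ_2 = 1. A short simulation sketch of one application of the map T:

```python
import random
import statistics

rng = random.Random(3)
samples = []
for _ in range(100_000):
    xi1 = rng.random()
    xi2 = 1.0 - xi1
    # apply T once to a pair of independent N(0,1) variables
    samples.append(xi1 ** 0.5 * rng.gauss(0.0, 1.0)
                   + xi2 ** 0.5 * rng.gauss(0.0, 1.0))

# the output should again look standard normal: mean 0, variance 1
print(statistics.mean(samples), statistics.pvariance(samples))
```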

Theorem 4 As x → +∞, we have the following version of the central limit theorem:

( N(x) − m(x) ) / √(V(x)) →_d N(0, 1).
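Theorem 4 can be illustrated by simulation. The sketch below draws copies of N(x) for the binary model (uniform split point, splitting probability 1 − e^{−s} for a fragment of size s) and checks that the standardized sample has small skewness and that the empirical mean grows roughly linearly in x, consistently with Theorem 2:

```python
import math
import random

def stable_count(x, rng):
    """Number N(x) of stable fragments from an interval of size x:
    a fragment of size s splits at a uniform point with probability
    1 - exp(-s), and is stable otherwise."""
    stack, n = [x], 0
    while stack:
        s = stack.pop()
        if rng.random() < 1.0 - math.exp(-s):
            u = rng.random()
            stack.append(u * s)
            stack.append((1.0 - u) * s)
        else:
            n += 1
    return n

rng = random.Random(5)
x = 100.0
data = [stable_count(x, rng) for _ in range(8_000)]
n = len(data)
mean = sum(data) / n
sd = (sum((d - mean) ** 2 for d in data) / n) ** 0.5
skew = sum(((d - mean) / sd) ** 3 for d in data) / n
print(mean / x, skew)  # mean/x near 2*gamma; skewness near 0 for large x
```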

Acknowledgments.

I thank the University of Versailles Saint-Quentin for supporting a research visit. Thanks are also due to the anonymous referees for comments that greatly helped improve the presentation of the results.

References

[1] Campi, X., Krivine, H., Sator, N. and Plagnol, E. (2000). Eur. Phys. J. D, 11, 233.

[2] Devroye, L. (1987). Branching processes in the analysis of the height of trees. Acta Informatica, 24, 277–298.

[3] Devroye, L. (1986). A note on the height of binary search trees. Journal of the ACM, 33, 489–498.

[4] Derrida, B. and Flyvbjerg, H. (1987). J. Phys. A, 20, 5273.

[5] Krapivsky, P.L., Grosse, I. and Ben-Naim, E. (2004). Stable distributions in stochastic fragmentation. J. Phys. A: Math. Gen., 37, 2863–2880.

[6] Lawn, B.R. and Wilshaw, T.R. (1975). Fracture of Brittle Solids. Cambridge University Press, Cambridge.

[7] Neininger, R. and Rüschendorf, L. (2004). A general limit theorem for recursive algorithms and combinatorial structures. Ann. Appl. Probab., 14, 378–418.

[8] Neininger, R. (2001). On a multivariate contraction method for random recursive structures with applications to Quicksort. Random Structures Algorithms, 19, 498–524.

[9] Rachev, S. and Rüschendorf, L. (1995). Probability metrics and recursive algorithms. Advances in Applied Probability, 27, 770–799.

[10] Rachev, S.T. (1991). Probability Metrics and the Stability of Stochastic Models. Wiley, New York.

[11] Rösler, U. (2001). On the analysis of stochastic divide and conquer algorithms. Algorithmica, 29, 238–261.

[12] Redner, S. (1990). In Statistical Models for the Fracture of Disordered Media, ed. H.J. Herrmann and S. Roux. Elsevier Science, New York.

[13] Rösler, U. and Rüschendorf, L. (2001). The contraction method for recursive algorithms. Algorithmica, 29, 3–33.

[14] Rösler, U. (1991). A limit theorem for "Quicksort". RAIRO Inform. Théor. Appl., 25, 85–100.

[15] Sibuya, M. and Itoh, Y. (1987). Random sequential bisection and its associated binary tree. Ann. Inst. Statist. Math., 39, 69–84.

[16] Janson, S. and Neininger, R. (2007). The size of random fragmentation trees. Probab. Theory Relat. Fields, to appear.

[17] Zolotarev, V.M. (1977). Ideal metrics in the problem of approximating the distributions of sums of independent random variables (Russian). Teor. Verojatnost. i Primenen., 22, 449–465.

([dagger]) Research supported by the Ministry of higher education, scientific research and technology (research units: 99/UR/15-10 and UR04DN04)

Rafik Aguech ([dagger])

Faculte des Sciences de Monastir, Departement de Mathematiques, 5019 Monastir. Tunisia.
COPYRIGHT 2008 DMTCS
